Jan 30 06:44:45 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 06:44:45 crc restorecon[4464]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:45 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 
06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 06:44:46 crc 
restorecon[4464]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 06:44:46 crc restorecon[4464]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 06:44:46 crc kubenswrapper[4520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 06:44:46 crc kubenswrapper[4520]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 06:44:46 crc kubenswrapper[4520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 06:44:46 crc kubenswrapper[4520]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
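[Editor's note: the kubenswrapper warnings above flag --container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir and --register-with-taints as deprecated, pointing at the file passed via the kubelet's --config flag instead. Below is a minimal sketch of generating such a KubeletConfiguration. The values are illustrative assumptions, not this node's real settings, and the field names follow the kubelet.config.k8s.io/v1beta1 schema as commonly documented, so verify them against your kubelet version. JSON is a valid YAML subset, so the file it writes can be handed to --config directly.]

# Sketch: emit a KubeletConfiguration covering the deprecated flags
# warned about above. All values are illustrative placeholders, not
# settings recovered from this log.
import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
    # replaces --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # replaces --register-with-taints (hypothetical taint)
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # replaces --system-reserved (hypothetical reservations)
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
}

with open("kubelet-config.json", "w") as f:
    json.dump(kubelet_config, f, indent=2)

[The kubelet would then be started with --config=kubelet-config.json in place of the individual flags; --minimum-container-ttl-duration has no direct config-file equivalent and, as its warning says, is superseded by the eviction settings (evictionHard / evictionSoft in the same schema).]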
Jan 30 06:44:46 crc kubenswrapper[4520]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 06:44:46 crc kubenswrapper[4520]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.550403 4520 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554555 4520 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554574 4520 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554579 4520 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554583 4520 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554588 4520 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554593 4520 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554598 4520 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554602 4520 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554605 4520 feature_gate.go:330] unrecognized feature gate: Example Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554610 4520 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554613 4520 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554619 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554623 4520 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554629 4520 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554633 4520 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554638 4520 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554644 4520 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554654 4520 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554659 4520 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554662 4520 feature_gate.go:330] unrecognized feature gate: 
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554665 4520 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554669 4520 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554672 4520 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554675 4520 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554679 4520 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554690 4520 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554694 4520 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554697 4520 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554700 4520 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554703 4520 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554706 4520 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554709 4520 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554713 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554715 4520 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554719 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554722 4520 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554725 4520 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554728 4520 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554733 4520 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554737 4520 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554741 4520 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554745 4520 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554748 4520 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554752 4520 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554756 4520 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554759 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554762 4520 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554766 4520 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554770 4520 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554774 4520 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554777 4520 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554780 4520 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554783 4520 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554787 4520 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554790 4520 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554792 4520 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554796 4520 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554799 4520 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554802 4520 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554805 4520 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554808 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554812 4520 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554817 4520 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554819 4520 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554823 4520 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554826 4520 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554830 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554833 4520 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554836 4520 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554838 4520 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.554841 4520 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
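The "unrecognized feature gate" warnings come from OpenShift-specific gate names that the kubelet's embedded Kubernetes feature-gate registry does not know; the kubelet re-evaluates the gate set several times during startup, so the same list recurs later in this boot with advancing timestamps. A sketch for tallying the warnings, assuming journal text on stdin (the script name is invented for illustration):

```python
#!/usr/bin/env python3
"""Hypothetical helper: tally 'unrecognized feature gate' warnings.

Assumed usage:  journalctl -u kubelet | python3 gate_warnings.py
"""
import re
import sys
from collections import Counter

PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

counts = Counter(m.group(1) for m in map(PATTERN.search, sys.stdin) if m)
for gate, n in counts.most_common():
    print(f"{n:3d}  {gate}")
print(f"{len(counts)} distinct unrecognized gates")
```

On this log each gate name should show the same count, one per evaluation round, which is a quick way to confirm the repetition is benign rather than a config loop.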
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554915 4520 flags.go:64] FLAG: --address="0.0.0.0"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554922 4520 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554927 4520 flags.go:64] FLAG: --anonymous-auth="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554932 4520 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554936 4520 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554939 4520 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554944 4520 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554947 4520 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554951 4520 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554956 4520 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554959 4520 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554963 4520 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554966 4520 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554970 4520 flags.go:64] FLAG: --cgroup-root=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554973 4520 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554977 4520 flags.go:64] FLAG: --client-ca-file=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554981 4520 flags.go:64] FLAG: --cloud-config=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554984 4520 flags.go:64] FLAG: --cloud-provider=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554987 4520 flags.go:64] FLAG: --cluster-dns="[]"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554993 4520 flags.go:64] FLAG: --cluster-domain=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.554997 4520 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555002 4520 flags.go:64] FLAG: --config-dir=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555005 4520 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555010 4520 flags.go:64] FLAG: --container-log-max-files="5"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555014 4520 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555019 4520 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555023 4520 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555027 4520 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555031 4520 flags.go:64] FLAG: --contention-profiling="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555035 4520 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555038 4520 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555041 4520 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555045 4520 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555049 4520 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555053 4520 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555057 4520 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555060 4520 flags.go:64] FLAG: --enable-load-reader="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555063 4520 flags.go:64] FLAG: --enable-server="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555067 4520 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555071 4520 flags.go:64] FLAG: --event-burst="100"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555075 4520 flags.go:64] FLAG: --event-qps="50"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555079 4520 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555083 4520 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555086 4520 flags.go:64] FLAG: --eviction-hard=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555091 4520 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555094 4520 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555098 4520 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555101 4520 flags.go:64] FLAG: --eviction-soft=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555104 4520 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555108 4520 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555111 4520 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555114 4520 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555118 4520 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555121 4520 flags.go:64] FLAG: --fail-swap-on="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555124 4520 flags.go:64] FLAG: --feature-gates=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555128 4520 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555131 4520 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555134 4520 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555138 4520 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555141 4520 flags.go:64] FLAG: --healthz-port="10248"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555145 4520 flags.go:64] FLAG: --help="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555148 4520 flags.go:64] FLAG: --hostname-override=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555151 4520 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555155 4520 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555158 4520 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555162 4520 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555165 4520 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555168 4520 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555172 4520 flags.go:64] FLAG: --image-service-endpoint=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555175 4520 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555179 4520 flags.go:64] FLAG: --kube-api-burst="100"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555182 4520 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555186 4520 flags.go:64] FLAG: --kube-api-qps="50"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555190 4520 flags.go:64] FLAG: --kube-reserved=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555194 4520 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555197 4520 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555200 4520 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555203 4520 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555207 4520 flags.go:64] FLAG: --lock-file=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555210 4520 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555214 4520 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555217 4520 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555222 4520 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555225 4520 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555228 4520 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555231 4520 flags.go:64] FLAG: --logging-format="text"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555235 4520 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555238 4520 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555242 4520 flags.go:64] FLAG: --manifest-url=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555245 4520 flags.go:64] FLAG: --manifest-url-header=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555251 4520 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555255 4520 flags.go:64] FLAG: --max-open-files="1000000"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555258 4520 flags.go:64] FLAG: --max-pods="110"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555262 4520 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555265 4520 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555269 4520 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555272 4520 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555276 4520 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555279 4520 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555282 4520 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555290 4520 flags.go:64] FLAG: --node-status-max-images="50"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555294 4520 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555298 4520 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555302 4520 flags.go:64] FLAG: --pod-cidr=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555306 4520 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555312 4520 flags.go:64] FLAG: --pod-manifest-path=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555317 4520 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555320 4520 flags.go:64] FLAG: --pods-per-core="0"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555323 4520 flags.go:64] FLAG: --port="10250"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555327 4520 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555331 4520 flags.go:64] FLAG: --provider-id=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555334 4520 flags.go:64] FLAG: --qos-reserved=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555337 4520 flags.go:64] FLAG: --read-only-port="10255"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555340 4520 flags.go:64] FLAG: --register-node="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555344 4520 flags.go:64] FLAG: --register-schedulable="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555347 4520 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555353 4520 flags.go:64] FLAG: --registry-burst="10"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555356 4520 flags.go:64] FLAG: --registry-qps="5"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555359 4520 flags.go:64] FLAG: --reserved-cpus=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555362 4520 flags.go:64] FLAG: --reserved-memory=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555366 4520 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555370 4520 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555374 4520 flags.go:64] FLAG: --rotate-certificates="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555379 4520 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555383 4520 flags.go:64] FLAG: --runonce="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555387 4520 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555391 4520 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555396 4520 flags.go:64] FLAG: --seccomp-default="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555400 4520 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555403 4520 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555408 4520 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555411 4520 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555415 4520 flags.go:64] FLAG: --storage-driver-password="root"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555418 4520 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555422 4520 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555425 4520 flags.go:64] FLAG: --storage-driver-user="root"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555429 4520 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555433 4520 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555437 4520 flags.go:64] FLAG: --system-cgroups=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555441 4520 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555451 4520 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555455 4520 flags.go:64] FLAG: --tls-cert-file=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555458 4520 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555463 4520 flags.go:64] FLAG: --tls-min-version=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555466 4520 flags.go:64] FLAG: --tls-private-key-file=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555469 4520 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555472 4520 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555476 4520 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555479 4520 flags.go:64] FLAG: --v="2"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555484 4520 flags.go:64] FLAG: --version="false"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555488 4520 flags.go:64] FLAG: --vmodule=""
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555492 4520 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555495 4520 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
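The flags.go:64 dump above prints every flag once, already quoted, which makes it easy to reconstruct the kubelet's effective command line. A sketch under the same assumed journalctl pipeline; the flag names probed at the end are just examples picked from this dump:

```python
#!/usr/bin/env python3
"""Hypothetical helper: turn the 'FLAG: --name="value"' dump into a dict.

Assumed usage:  journalctl -u kubelet | python3 flag_dump.py
"""
import re
import sys

# flags.go logs one entry per flag, e.g.  FLAG: --node-ip="192.168.126.11"
PATTERN = re.compile(r'FLAG: (--[\w-]+)="(.*)"')

flags = dict(m.groups() for m in map(PATTERN.search, sys.stdin) if m)
# Flags left at their zero value were not set on the command line or in config.
for name in ("--config", "--node-ip", "--system-reserved", "--register-with-taints"):
    print(name, "=", flags.get(name, "<not logged>"))
```

Note the dump reflects defaults plus command-line overrides only; values applied later from /etc/kubernetes/kubelet.conf (for example the cgroup driver, corrected further down from "cgroupfs" to "systemd" via CRI) do not appear here.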
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555608 4520 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555613 4520 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555616 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555620 4520 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555623 4520 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555626 4520 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555629 4520 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555632 4520 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555635 4520 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555638 4520 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555642 4520 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555645 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555649 4520 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555652 4520 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555655 4520 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555658 4520 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555661 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555664 4520 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555667 4520 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555670 4520 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555673 4520 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555676 4520 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555679 4520 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555695 4520 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555699 4520 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555703 4520 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555706 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555710 4520 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555713 4520 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555716 4520 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555719 4520 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555722 4520 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555726 4520 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555729 4520 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555732 4520 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555735 4520 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555739 4520 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555743 4520 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555746 4520 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555750 4520 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555756 4520 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555759 4520 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555768 4520 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555772 4520 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555776 4520 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555781 4520 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555786 4520 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555791 4520 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555796 4520 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555800 4520 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555803 4520 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555807 4520 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555811 4520 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555814 4520 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555817 4520 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555820 4520 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555823 4520 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555826 4520 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555829 4520 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555837 4520 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555840 4520 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555843 4520 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555846 4520 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555849 4520 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555853 4520 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555856 4520 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555859 4520 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555862 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555865 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555868 4520 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.555871 4520 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.555877 4520 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.563380 4520 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.563411 4520 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563483 4520 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563498 4520 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563502 4520 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563506 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563522 4520 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563528 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563535 4520 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563542 4520 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563547 4520 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563551 4520 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563555 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563558 4520 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563561 4520 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563565 4520 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563569 4520 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563572 4520 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563575 4520 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563579 4520 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563582 4520 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563586 4520 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563592 4520 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563596 4520 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563599 4520 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563603 4520 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563606 4520 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563610 4520 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563614 4520 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563617 4520 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563620 4520 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563623 4520 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563626 4520 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563630 4520 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563634 4520 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563638 4520 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563643 4520 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563646 4520 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563650 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563653 4520 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563657 4520 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563661 4520 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563666 4520 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563671 4520 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563674 4520 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563679 4520 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563688 4520 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563692 4520 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563695 4520 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563699 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563703 4520 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563706 4520 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563711 4520 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563715 4520 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563721 4520 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563725 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563729 4520 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563732 4520 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563735 4520 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563738 4520 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563741 4520 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563745 4520 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563749 4520 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563752 4520 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563755 4520 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563757 4520 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563760 4520 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563767 4520 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563770 4520 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563774 4520 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563776 4520 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563779 4520 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563783 4520 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.563789 4520 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
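The feature_gate.go:386 summary is the effective gate map after the warnings: unrecognized OpenShift gates are dropped, and only the Kubernetes-native ones survive. It is printed in Go map syntax, which a small parser can turn into structured data; a sketch, with the LINE constant abridged from the log rather than read from the journal:

```python
#!/usr/bin/env python3
"""Sketch: decode a 'feature gates: {map[Name:bool ...]}' summary line."""
import re

LINE = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "DisableKubeletCloudCredentialProviders:true KMSv1:true "
        "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")  # abridged

def parse_gates(line: str) -> dict[str, bool]:
    body = re.search(r"map\[(.*)\]", line).group(1)
    return {name: val == "true"
            for name, val in (pair.split(":") for pair in body.split())}

enabled = [g for g, on in parse_gates(LINE).items() if on]
print(sorted(enabled))
```

On this boot the full map enables only CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1 and ValidatingAdmissionPolicy; everything else listed is explicitly false.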
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563881 4520 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563888 4520 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563892 4520 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563897 4520 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563900 4520 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563904 4520 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563907 4520 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563911 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563915 4520 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563918 4520 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563922 4520 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563925 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563928 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563931 4520 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563934 4520 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563937 4520 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563939 4520 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563942 4520 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563945 4520 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563948 4520 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563951 4520 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563954 4520 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563957 4520 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563960 4520 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563963 4520 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563966 4520 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563971 4520 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563974 4520 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563978 4520 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563982 4520 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563986 4520 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563989 4520 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563992 4520 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.563998 4520 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564002 4520 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564006 4520 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564010 4520 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564013 4520 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564017 4520 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564020 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564024 4520 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564027 4520 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564030 4520 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564033 4520 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564037 4520 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564040 4520 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564045 4520 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564049 4520 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564053 4520 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564057 4520 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564061 4520 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564064 4520 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564068 4520 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564071 4520 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564075 4520 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564080 4520 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564084 4520 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564087 4520 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564090 4520 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564094 4520 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564097 4520 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564100 4520 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564105 4520 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564108 4520 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564111 4520 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564115 4520 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564119 4520 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564122 4520 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564126 4520 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564130 4520 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.564135 4520 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.564141 4520 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.564273 4520 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.566947 4520 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.567025 4520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.567852 4520 server.go:997] "Starting client certificate rotation"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.567875 4520 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.568033 4520 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-28 07:04:43.486003921 +0000 UTC
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.568098 4520 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.582638 4520 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.584484 4520 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError"
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.585086 4520 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
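The certificate_manager entries above show the rotation already past its (jittered) deadline, so the kubelet immediately tries to file a CSR; the Post to https://api-int.crc.testing:6443 fails with connection refused because the API server is not yet up this early in the boot, and the manager retries later. A small sanity-check sketch over the two timestamps quoted in the log (values copied from the entries above; the jitter means the deadline differs on every start):

```python
#!/usr/bin/env python3
"""Sketch: sanity-check the client-certificate rotation times logged above."""
from datetime import datetime, timezone

expiration = datetime(2026, 2, 24, 5, 52, 8, tzinfo=timezone.utc)   # "Certificate expiration is ..."
deadline = datetime(2025, 12, 28, 7, 4, 43, tzinfo=timezone.utc)    # "rotation deadline is ..." (jittered)
boot_time = datetime(2026, 1, 30, 6, 44, 46, tzinfo=timezone.utc)   # timestamp of this log entry

assert deadline < expiration
print("deadline passed:", boot_time > deadline)                 # True -> rotate now
print("margin before expiry:", expiration - deadline)           # ~58 days of headroom
```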
minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/94b752e0a51c0134b00ddef6dc7a933a9d7c1d9bdc88a18dae4192a0d557d623/merged major:0 minor:43 fsType:overlay blockSize:0}] Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.626796 4520 manager.go:217] Machine: {Timestamp:2026-01-30 06:44:46.625309214 +0000 UTC m=+0.253661416 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2445406 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4674bc25-0afd-48cd-9644-935726ab41fb BootID:28bb964a-9c71-4787-ad40-4262dd439958 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:65536000 Type:vfs Inodes:3076108 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8b:b8:02 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:enp3s0 MacAddress:fa:16:3e:8b:b8:02 Speed:-1 Mtu:1500} {Name:enp7s0 MacAddress:fa:16:3e:6e:88:16 Speed:-1 Mtu:1440} {Name:enp7s0.20 MacAddress:52:54:00:d1:9e:39 Speed:-1 Mtu:1436} {Name:enp7s0.21 MacAddress:52:54:00:d7:ab:26 Speed:-1 Mtu:1436} {Name:enp7s0.22 MacAddress:52:54:00:df:a1:6c Speed:-1 Mtu:1436} {Name:eth10 MacAddress:1a:b2:64:52:42:6f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:06:25:74:44:9c:c3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:65536 Type:Data Level:1} {Id:6 Size:65536 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:65536 Type:Data Level:1} {Id:7 Size:65536 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.626962 4520 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
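Each kubenswrapper entry above carries a klog header ahead of the message: a severity letter (I/W/E/F), an MMDD date, a microsecond timestamp, the PID, and the emitting source file:line. A minimal parsing sketch, assuming klog's standard header layout; `parse_klog` and the regex are illustrative helpers, not part of any tool shown in this log:

```python
import re

# klog header embedded in each journal entry:
#   <sev>MMDD hh:mm:ss.micros <pid> <file>:<line>] <message>
# e.g. "I0130 06:44:46.626962 4520 manager_no_libpfm.go:29] cAdvisor is ..."
KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+) '
    r'(?P<pid>\d+) '
    r'(?P<src>[\w./-]+:\d+)\] '
    r'(?P<msg>.*)'
)

SEVERITIES = {"I": "INFO", "W": "WARNING", "E": "ERROR", "F": "FATAL"}

def parse_klog(entry: str):
    """Return (severity, time, source, message) for one klog entry, or None."""
    m = KLOG.search(entry)
    if not m:
        return None
    return SEVERITIES[m["sev"]], m["time"], m["src"], m["msg"]

print(parse_klog(
    'I0130 06:44:46.626962 4520 manager_no_libpfm.go:29] '
    'cAdvisor is build without cgo and/or libpfm support.'
))
```

Filtering on the severity field is the quickest way to separate the W/E entries (connection refused, unrecognized feature gates) from the routine I startup chatter above.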
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.627044 4520 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.627598 4520 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.627762 4520 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.627794 4520 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.627957 4520 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.627966 4520 container_manager_linux.go:303] "Creating device plugin manager" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.628263 4520 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.628285 4520 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.628578 4520 state_mem.go:36] "Initialized new in-memory state store" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.628648 4520 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.631122 4520 kubelet.go:418] "Attempting to sync node with API server" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.631150 4520 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
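The container_manager_linux.go:272 entry serializes the kubelet's NodeConfig as JSON, including the system reservations and the hard-eviction thresholds it will enforce. A sketch that decodes a trimmed copy of that JSON and prints each threshold in readable form; the trimming and the output format are editorial choices, the values are copied from the entry above:

```python
import json

# Trimmed copy of the nodeConfig JSON from the container_manager_linux.go:272
# entry above: only SystemReserved and HardEvictionThresholds are kept.
node_config = json.loads("""
{
  "SystemReserved": {"cpu": "200m", "ephemeral-storage": "350Mi", "memory": "350Mi"},
  "HardEvictionThresholds": [
    {"Signal": "memory.available",   "Operator": "LessThan",
     "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available",   "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.1}},
    {"Signal": "nodefs.inodesFree",  "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "imagefs.available",  "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.15}},
    {"Signal": "imagefs.inodesFree", "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.05}}
  ]
}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    # Each threshold is either an absolute quantity or a percentage of capacity.
    limit = v["Quantity"] if v["Quantity"] is not None else f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]:<18} {t["Operator"]} {limit}')
```

Run against this config, the sketch shows the node will hard-evict once memory.available drops below 100Mi or nodefs.available below 10%, which is useful context for any eviction events later in the log.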
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.631194 4520 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.631206 4520 kubelet.go:324] "Adding apiserver pod source" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.631217 4520 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.634410 4520 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.634413 4520 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.634498 4520 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.634528 4520 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.635175 4520 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.636238 4520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
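Every reflector failure above fails the same way: dial tcp 192.168.25.87:6443: connect: connection refused. That pattern means the node resolves and reaches api-int.crc.testing, but nothing is listening on 6443 yet (the static-pod kube-apiserver has not come up). A small probe, offered as a triage aid rather than anything the kubelet itself runs, distinguishes that case from a timeout; `probe` is a hypothetical helper, and the host/port are copied from the log:

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify TCP reachability of host:port for quick triage."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        # Host reachable, no listener yet -- matches the log's failure mode.
        return "refused (endpoint up, no listener on port)"
    except (socket.timeout, TimeoutError):
        return "timeout (unreachable or filtered)"
    except OSError as exc:
        return f"error: {exc}"

# Endpoint taken from the reflector errors above; adjust for your environment.
print(probe("192.168.25.87", 6443))
```

A "refused" result while the kubelet retries is expected during bootstrap of a single-node cluster like this one; a "timeout" would instead point at DNS, routing, or firewall problems.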
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.637201 4520 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638074 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638098 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638107 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638113 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638125 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638133 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638147 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638158 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638167 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638174 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638184 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638191 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.638790 4520 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.639141 4520 server.go:1280] "Started kubelet" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.639561 4520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.639568 4520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.640165 4520 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 06:44:46 crc systemd[1]: Started Kubernetes Kubelet. 
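The certificate_manager entries pair each certificate's expiration with an earlier, pre-computed rotation deadline (for the client cert above, expiry 2026-02-24 vs. deadline 2025-12-28). client-go picks that deadline at a random point roughly 70-90% of the way through the certificate's validity window; the sketch below reproduces that arithmetic under the assumption of a one-year certificate, with `rotation_deadline` as a hypothetical stand-in for client-go's internal deadline computation, whose exact jitter is an implementation detail:

```python
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime,
                      rng: random.Random) -> datetime:
    """Pick a deadline uniformly in the 70-90% span of the validity window."""
    total = not_after - not_before
    return not_before + total * (0.7 + 0.2 * rng.random())

# Assumed one-year validity window ending at the logged expiration time.
not_after = datetime(2026, 2, 24, 5, 52, 8)
not_before = not_after - timedelta(days=365)

# Any result lands between ~2025-11-06 and ~2026-01-18; the logged deadline
# of 2025-12-28 (about 84% of the window) is consistent with that range.
print(rotation_deadline(not_before, not_after, random.Random(0)))
```

The jitter explains why the two certificates above (client and serving) rotate on different dates despite expiring at nearly the same time.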
Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.641058 4520 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.642882 4520 server.go:460] "Adding debug handlers to kubelet server" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.643643 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.643894 4520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.644990 4520 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.645005 4520 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.645105 4520 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.645314 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused" interval="200ms" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.643999 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:01:10.101620773 +0000 UTC Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.645380 4520 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.645628 4520 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.645672 4520 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.647433 4520 factory.go:55] Registering systemd factory Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.647460 4520 factory.go:221] Registration of the systemd container factory successfully Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.648048 4520 factory.go:153] Registering CRI-O factory Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.648121 4520 factory.go:221] Registration of the crio container factory successfully Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.648227 4520 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.648312 4520 factory.go:103] Registering Raw factory Jan 30 06:44:46 crc kubenswrapper[4520]: 
I0130 06:44:46.648377 4520 manager.go:1196] Started watching for new ooms in manager Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.649072 4520 manager.go:319] Starting recovery of all containers Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650404 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650439 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650451 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650461 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650470 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650478 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650487 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650496 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650506 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650531 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650540 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650548 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650556 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650567 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650590 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650598 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650606 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650616 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650626 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650634 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650645 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650654 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650663 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650670 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650678 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650700 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650710 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650721 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650728 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650739 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650762 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650771 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650779 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650787 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650795 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650802 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650811 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650820 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650829 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650837 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.650845 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.651897 4520 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653009 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653057 4520 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653081 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653092 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653105 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653118 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653128 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653140 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653151 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653162 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653174 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653194 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653205 4520 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653220 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653235 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653246 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653258 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653270 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653280 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653295 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653305 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653317 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653325 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653334 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653346 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653355 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653366 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653375 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653384 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653395 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653404 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653413 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653429 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653439 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653451 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653460 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653470 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653482 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653492 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653505 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653531 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653542 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653553 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653562 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653574 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653582 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653592 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653603 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653613 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653623 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653663 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653702 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653715 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653725 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653737 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653746 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653758 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653771 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653780 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653789 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653801 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653809 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653820 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653872 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653885 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653897 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653972 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.653987 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654001 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654013 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654028 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654091 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654103 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654116 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654125 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654137 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654145 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654202 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654214 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654221 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654229 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654240 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654299 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654310 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654319 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654327 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654338 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654346 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654379 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654387 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654395 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654405 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654415 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654480 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654490 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654499 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654527 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654536 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654547 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654609 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654619 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654630 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654638 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654649 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654721 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654731 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654742 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654751 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654760 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654772 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654782 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654795 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654804 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654861 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654872 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654883 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654894 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654902 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654967 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654977 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654985 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.654996 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655006 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655016 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655068 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655076 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655087 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655096 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655105 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.648420 4520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.25.87:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f6f3d106c5ef6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 06:44:46.639120118 +0000 UTC m=+0.267472299,LastTimestamp:2026-01-30 06:44:46.639120118 +0000 UTC m=+0.267472299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655171 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655180 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655192 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655201 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655209 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655221 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655299 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655332 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655341 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655351 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655362 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655371 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655380 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655392 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655401 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655411 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655493 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655502 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655688 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.655703 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656423 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656468 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656480 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656492 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656510 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656537 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656552 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656564 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656576 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656593 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656605 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656615 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656628 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656638 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656651 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656662 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656673 4520 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656694 4520 reconstruct.go:97] "Volume reconstruction finished" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.656702 4520 reconciler.go:26] "Reconciler: start to sync state" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.671965 4520 manager.go:324] Recovery completed Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.681792 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.682279 4520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.683079 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.683116 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.683127 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.683962 4520 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.684036 4520 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.684091 4520 state_mem.go:36] "Initialized new in-memory state store" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.684354 4520 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.684388 4520 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.684415 4520 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.684454 4520 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 06:44:46 crc kubenswrapper[4520]: W0130 06:44:46.685972 4520 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.686017 4520 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.690752 4520 policy_none.go:49] "None policy: Start" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.692007 4520 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.692032 4520 state_mem.go:35] "Initializing new in-memory state store" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.736247 4520 manager.go:334] "Starting Device Plugin manager" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.736295 4520 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.736310 4520 server.go:79] "Starting device plugin registration server" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.736644 4520 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.736668 4520 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.736901 4520 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.737000 4520 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.737013 4520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.745168 4520 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.784861 4520 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.784961 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 
06:44:46.785635 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.785691 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.785703 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.785847 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.785947 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.785981 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786549 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786557 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786566 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786573 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786576 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786582 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786645 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786784 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
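
Note the source="file" in the SyncLoop ADD record above: the five control-plane pods come from static manifests on disk, which is why the kubelet can start them while every API-bound call in this stretch (the event POST, the RuntimeClass reflector, the node registration and lease attempts below) fails with dial tcp 192.168.25.87:6443: connect: connection refused; kube-apiserver-crc is itself one of the pods being started. A tiny probe sketch that performs the same TCP dial the kubelet keeps attempting, using the endpoint taken from the log; this is a diagnostic sketch, not kubelet code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The endpoint every failing call in this log is trying to reach.
	conn, err := net.DialTimeout("tcp", "192.168.25.87:6443", 2*time.Second)
	if err != nil {
		// While the kube-apiserver static pod is still coming up this prints
		// the same "connect: connection refused" seen in the kubelet records.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
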
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.786803 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787087 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787104 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787111 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787183 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787222 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787241 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787248 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787472 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787504 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787596 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787687 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787697 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787770 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787966 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787986 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.787993 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.788392 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.788425 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.788638 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.788655 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.788662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.788767 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.788791 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.789413 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.789431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.789441 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.789923 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.789974 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.789987 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.837482 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.839884 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.839937 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.839948 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.839984 4520 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.840357 4520 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.87:6443: connect: connection refused" node="crc" Jan 30 06:44:46 crc kubenswrapper[4520]: E0130 06:44:46.845727 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection 
refused" interval="400ms" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858487 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858536 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858560 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858582 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858638 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858694 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858728 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858793 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858814 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858829 4520 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858846 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858863 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858918 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858938 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.858953 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959612 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959642 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959661 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959683 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959697 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959710 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959723 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959736 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959757 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959772 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959784 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959801 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959813 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959826 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.959840 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960061 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960090 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960113 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960132 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960139 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960159 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960160 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960178 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960194 
4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960200 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960225 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960234 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960264 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960266 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:46 crc kubenswrapper[4520]: I0130 06:44:46.960283 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.040476 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.041646 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.041664 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.041684 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.041718 4520 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 06:44:47 crc kubenswrapper[4520]: E0130 06:44:47.042038 4520 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.87:6443: connect: connection refused" node="crc" Jan 30 
06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.112448 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.117285 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:47 crc kubenswrapper[4520]: W0130 06:44:47.133611 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b394d1b4de6b8653876a858eef25d93a7b7831ca31ff8c4273874f7feaa7511d WatchSource:0}: Error finding container b394d1b4de6b8653876a858eef25d93a7b7831ca31ff8c4273874f7feaa7511d: Status 404 returned error can't find the container with id b394d1b4de6b8653876a858eef25d93a7b7831ca31ff8c4273874f7feaa7511d Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.136224 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 06:44:47 crc kubenswrapper[4520]: W0130 06:44:47.136530 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-55f3980ccc87e474525d116cf9fc73e3b5a755d1f239e3e598bdc96d1ed06e74 WatchSource:0}: Error finding container 55f3980ccc87e474525d116cf9fc73e3b5a755d1f239e3e598bdc96d1ed06e74: Status 404 returned error can't find the container with id 55f3980ccc87e474525d116cf9fc73e3b5a755d1f239e3e598bdc96d1ed06e74 Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.148324 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 06:44:47 crc kubenswrapper[4520]: W0130 06:44:47.149463 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ace7731c8c3d3bf5e6bbafe4c3e4a1eaf7b2d6f1d33b9108922a15b0c03292bc WatchSource:0}: Error finding container ace7731c8c3d3bf5e6bbafe4c3e4a1eaf7b2d6f1d33b9108922a15b0c03292bc: Status 404 returned error can't find the container with id ace7731c8c3d3bf5e6bbafe4c3e4a1eaf7b2d6f1d33b9108922a15b0c03292bc Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.152531 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:47 crc kubenswrapper[4520]: W0130 06:44:47.165405 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ba6d1726e445c20a50f57d74f5d21c114ac80e4881d0a651b26370f01d6bea6c WatchSource:0}: Error finding container ba6d1726e445c20a50f57d74f5d21c114ac80e4881d0a651b26370f01d6bea6c: Status 404 returned error can't find the container with id ba6d1726e445c20a50f57d74f5d21c114ac80e4881d0a651b26370f01d6bea6c Jan 30 06:44:47 crc kubenswrapper[4520]: E0130 06:44:47.246494 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused" interval="800ms" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.442808 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.444080 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.444116 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.444127 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.444154 4520 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 06:44:47 crc kubenswrapper[4520]: E0130 06:44:47.444452 4520 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.87:6443: connect: connection refused" node="crc" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.641734 4520 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.645871 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:53:03.129555445 +0000 UTC Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.689146 4520 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec" exitCode=0 Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.689282 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.689549 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"23205fa344a7e88332f719823b82b7397f0a03a383ca4efc2b5ba555d2a004bb"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.689725 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.691029 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.691060 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.691071 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.692113 4520 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473" exitCode=0 Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.692146 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.692186 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ace7731c8c3d3bf5e6bbafe4c3e4a1eaf7b2d6f1d33b9108922a15b0c03292bc"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.692274 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.693320 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.693342 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.693354 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.697217 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.697263 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55f3980ccc87e474525d116cf9fc73e3b5a755d1f239e3e598bdc96d1ed06e74"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.698393 4520 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1" exitCode=0 Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.698444 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.698463 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b394d1b4de6b8653876a858eef25d93a7b7831ca31ff8c4273874f7feaa7511d"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.698549 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.699182 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.699240 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.699251 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.700347 4520 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2" exitCode=0 Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.700374 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.700391 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba6d1726e445c20a50f57d74f5d21c114ac80e4881d0a651b26370f01d6bea6c"} Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.700458 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.701000 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.701018 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.701026 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.702364 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.703062 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.703086 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:47 crc kubenswrapper[4520]: I0130 06:44:47.703095 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:47 crc kubenswrapper[4520]: W0130 06:44:47.925166 4520 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:47 crc kubenswrapper[4520]: E0130 06:44:47.925403 4520 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:44:47 crc kubenswrapper[4520]: W0130 06:44:47.962569 4520 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:47 crc kubenswrapper[4520]: E0130 06:44:47.962706 4520 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:44:48 crc kubenswrapper[4520]: E0130 06:44:48.047014 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused" interval="1.6s" Jan 30 06:44:48 crc kubenswrapper[4520]: W0130 06:44:48.226754 4520 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:48 crc kubenswrapper[4520]: E0130 06:44:48.226845 4520 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.245406 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.246525 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.246565 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.246574 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.246602 4520 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 06:44:48 crc kubenswrapper[4520]: E0130 06:44:48.247004 4520 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.87:6443: connect: connection refused" node="crc" Jan 30 06:44:48 crc kubenswrapper[4520]: W0130 06:44:48.285977 4520 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.87:6443: connect: connection refused Jan 30 06:44:48 crc kubenswrapper[4520]: 
E0130 06:44:48.286026 4520 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.87:6443: connect: connection refused" logger="UnhandledError" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.593031 4520 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.646030 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:05:48.151012648 +0000 UTC Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.705196 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.705233 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.705244 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.705252 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.705261 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.705353 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.705981 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.706003 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.706010 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.707712 4520 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2" exitCode=0 Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.707755 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.707832 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.708403 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.708421 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.708429 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.710702 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5785142c6cf161b6452de8efa5caafe1bd42705e2454274648f552108de7c84b"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.710758 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.711733 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.711772 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.711781 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.713336 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.713360 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.713372 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.713419 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.714051 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.714069 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.714077 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:48 crc 
kubenswrapper[4520]: I0130 06:44:48.715910 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.715940 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.715951 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb"} Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.716025 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.716574 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.716594 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:48 crc kubenswrapper[4520]: I0130 06:44:48.716603 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.070507 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.646864 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:21:08.651216496 +0000 UTC Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.720416 4520 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434" exitCode=0 Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.720550 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.720584 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.720963 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434"} Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721048 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721242 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721498 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721544 4520 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721557 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721667 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721712 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721722 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721792 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721814 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.721822 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.847292 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.847984 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.848029 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.848038 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:49 crc kubenswrapper[4520]: I0130 06:44:49.848057 4520 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.211011 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.214718 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.647601 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:13:40.170833774 +0000 UTC Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.727535 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892"} Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.727611 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d"} Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.727629 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc"} Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.727639 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83"} Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.727641 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.727649 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff"} Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.727831 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.728606 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.728646 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.728658 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.729467 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.729492 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:50 crc kubenswrapper[4520]: I0130 06:44:50.729504 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.647899 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:01:17.102354954 +0000 UTC Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.729493 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.729530 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.729545 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.730172 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.730193 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.730203 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.730234 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 
06:44:51.730248 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:51 crc kubenswrapper[4520]: I0130 06:44:51.730255 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:52 crc kubenswrapper[4520]: I0130 06:44:52.648226 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 16:40:28.404479032 +0000 UTC Jan 30 06:44:52 crc kubenswrapper[4520]: I0130 06:44:52.726753 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:52 crc kubenswrapper[4520]: I0130 06:44:52.726902 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:44:52 crc kubenswrapper[4520]: I0130 06:44:52.726935 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:52 crc kubenswrapper[4520]: I0130 06:44:52.727807 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:52 crc kubenswrapper[4520]: I0130 06:44:52.727849 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:52 crc kubenswrapper[4520]: I0130 06:44:52.727859 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:53 crc kubenswrapper[4520]: I0130 06:44:53.025939 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:53 crc kubenswrapper[4520]: I0130 06:44:53.026035 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:53 crc kubenswrapper[4520]: I0130 06:44:53.026724 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:53 crc kubenswrapper[4520]: I0130 06:44:53.026756 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:53 crc kubenswrapper[4520]: I0130 06:44:53.026765 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:53 crc kubenswrapper[4520]: I0130 06:44:53.648330 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:36:37.339788971 +0000 UTC Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.248392 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.248495 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.249363 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.249393 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.249402 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:54 crc 
kubenswrapper[4520]: I0130 06:44:54.413621 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.413717 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.414466 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.414499 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.414511 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.634971 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.635127 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.635675 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.635702 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.635712 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.649300 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:10:04.394349225 +0000 UTC Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.969413 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.969510 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.970035 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.970062 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:54 crc kubenswrapper[4520]: I0130 06:44:54.970071 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:55 crc kubenswrapper[4520]: I0130 06:44:55.649664 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:54:59.879900173 +0000 UTC Jan 30 06:44:56 crc kubenswrapper[4520]: I0130 06:44:56.190308 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 06:44:56 crc kubenswrapper[4520]: I0130 06:44:56.190499 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:56 crc kubenswrapper[4520]: I0130 06:44:56.191337 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 06:44:56 crc kubenswrapper[4520]: I0130 06:44:56.191373 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:56 crc kubenswrapper[4520]: I0130 06:44:56.191383 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:56 crc kubenswrapper[4520]: I0130 06:44:56.649735 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:03:38.588305243 +0000 UTC Jan 30 06:44:56 crc kubenswrapper[4520]: E0130 06:44:56.746080 4520 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.650113 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:15:33.125783663 +0000 UTC Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.708748 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.708869 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.709869 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.709892 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.709900 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.712569 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.739816 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.740276 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.740355 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.740418 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.969996 4520 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 06:44:57 crc kubenswrapper[4520]: I0130 06:44:57.970039 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 06:44:58 crc kubenswrapper[4520]: E0130 06:44:58.594557 4520 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 06:44:58 crc kubenswrapper[4520]: I0130 06:44:58.642208 4520 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 06:44:58 crc kubenswrapper[4520]: I0130 06:44:58.650579 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:47:56.849690206 +0000 UTC Jan 30 06:44:59 crc kubenswrapper[4520]: I0130 06:44:59.070607 4520 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 06:44:59 crc kubenswrapper[4520]: I0130 06:44:59.070738 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 06:44:59 crc kubenswrapper[4520]: I0130 06:44:59.115683 4520 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 06:44:59 crc kubenswrapper[4520]: I0130 06:44:59.115711 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 06:44:59 crc kubenswrapper[4520]: I0130 06:44:59.650944 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:36:55.230960754 +0000 UTC Jan 30 06:45:00 crc kubenswrapper[4520]: I0130 06:45:00.651040 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:46:20.439736339 +0000 UTC Jan 30 06:45:01 crc kubenswrapper[4520]: I0130 06:45:01.651652 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:01:21.412000043 +0000 UTC Jan 30 06:45:02 crc kubenswrapper[4520]: I0130 06:45:02.652271 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:08:23.766502844 +0000 UTC Jan 30 06:45:02 crc kubenswrapper[4520]: I0130 06:45:02.720854 4520 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 06:45:02 crc kubenswrapper[4520]: I0130 06:45:02.733256 4520 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 06:45:03 crc kubenswrapper[4520]: I0130 06:45:03.653214 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:06:29.173307098 +0000 UTC Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.075270 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.075428 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.076205 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.076231 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.076241 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.078801 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.148178 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.150606 4520 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.150920 4520 trace.go:236] Trace[115617218]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 06:44:50.556) (total time: 13594ms): Jan 30 06:45:04 crc kubenswrapper[4520]: Trace[115617218]: ---"Objects listed" error: 13593ms (06:45:04.150) Jan 30 06:45:04 crc kubenswrapper[4520]: Trace[115617218]: [13.594004157s] [13.594004157s] END Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.150941 4520 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.151370 4520 trace.go:236] Trace[393174072]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 06:44:49.735) (total time: 14416ms): Jan 30 06:45:04 crc kubenswrapper[4520]: Trace[393174072]: ---"Objects listed" error: 14416ms (06:45:04.151) Jan 30 06:45:04 crc kubenswrapper[4520]: Trace[393174072]: [14.41625498s] [14.41625498s] END Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.151385 4520 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 
06:45:04.152625 4520 trace.go:236] Trace[1555757436]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 06:44:50.701) (total time: 13451ms):
Jan 30 06:45:04 crc kubenswrapper[4520]: Trace[1555757436]: ---"Objects listed" error:<nil> 13451ms (06:45:04.152)
Jan 30 06:45:04 crc kubenswrapper[4520]: Trace[1555757436]: [13.451505986s] [13.451505986s] END
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.152643 4520 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.157022 4520 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.157103 4520 trace.go:236] Trace[1425087357]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 06:44:51.420) (total time: 12736ms):
Jan 30 06:45:04 crc kubenswrapper[4520]: Trace[1425087357]: ---"Objects listed" error:<nil> 12736ms (06:45:04.157)
Jan 30 06:45:04 crc kubenswrapper[4520]: Trace[1425087357]: [12.736784649s] [12.736784649s] END
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.157120 4520 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.239211 4520 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48924->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.239260 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48924->192.168.126.11:17697: read: connection reset by peer"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.239212 4520 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48926->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.239345 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48926->192.168.126.11:17697: read: connection reset by peer"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.249134 4520 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.249171 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.640189 4520 apiserver.go:52] "Watching apiserver"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.644190 4520 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.644411 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.644769 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.644769 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.644809 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.644890 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.644978 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.645014 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.645029 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.645261 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.645297 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.645873 4520 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.647660 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.647723 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.648012 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.648027 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.648018 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.648253 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.649215 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.649221 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.650749 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.653349 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:48:56.129978821 +0000 UTC
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658410 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658438 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658526 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658559 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658576 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658589 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658619 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658634 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658648 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658665 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658679 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658693 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658709 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658724 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658751 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658742 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658765 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658795 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658812 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658831 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658847 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658862 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658876 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658889 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658902 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658917 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658933 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658947 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658961 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658973 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.658990 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659006 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659024 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659040 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659053 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659069 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659084 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659097 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659111 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659125 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659139 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659175 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659191 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659205 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659219 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659231 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659246 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659259 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659272 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659310 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659326 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659341 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659355 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659370 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659387 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659402 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659416 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659429 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659444 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659459 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659472 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659486 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659500 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659526 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659540 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659555 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659572 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659610 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659626 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659641 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659659 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659675 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659693 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659707 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659728 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659742 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659757 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659771 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659786 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659802 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659818 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659835 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659852 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659004 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659875 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659917 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659933 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659949 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659965 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659980 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659996 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660011 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660026 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660041 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660056 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660071 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660086 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660101 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660116 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660131 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660145 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660160 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660202 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660219 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660236 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660252 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660267 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660284 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660299 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660320 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660336 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660353 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660368 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660390 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660405 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660420 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660435 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660450 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660466 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660483 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660692 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660710 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660725 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660739 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660755 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660772 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660788 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660806 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660821 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660836 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660850 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660871 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660886 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660902 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660916 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660932 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660947 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660961 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660976 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660993 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661011 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661042 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661057 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661136 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661155 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661186 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661204 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661221 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661239 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661265 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661284 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661301 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661317 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661334 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661350 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661365 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661382 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661397 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661414 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661432 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661446 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661462 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661476 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661491 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661505 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661542 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661559 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661574 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661590 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661605 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661622 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661638 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661657 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661673 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661689 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661705 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661720 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661734 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661749 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661764 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661780 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661801 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661815 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661832 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661848 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661863 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661878 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661894 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661910 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661925 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661942 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661958 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661974 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661991 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662007 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662025 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662041 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662068 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662090 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662112 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662151 
4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662180 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662197 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662214 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662232 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662247 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662262 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662279 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662294 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662311 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662327 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662350 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662360 4520 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659008 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659141 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659265 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659260 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659314 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659386 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659427 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659444 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659504 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659579 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659604 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659737 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659742 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659807 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659816 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659820 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659874 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659945 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.659954 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660149 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660263 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660302 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660309 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660419 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660440 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660454 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660566 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660592 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660595 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660843 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660848 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660975 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.660986 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661123 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661128 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661197 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661263 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661372 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661600 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661609 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661631 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661766 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661810 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661821 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661897 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.661970 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662046 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662134 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662262 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662821 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.662985 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.673839 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663008 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663023 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663139 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663152 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663383 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663394 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663612 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663675 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.673950 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663713 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663766 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663898 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.663974 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664004 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664151 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664202 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664250 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664302 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664438 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664601 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664776 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.664719 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.665032 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.665160 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.665224 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.665418 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.665652 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.665751 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.666088 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.666176 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:45:05.166139818 +0000 UTC m=+18.794491999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.666239 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.666660 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.666941 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.667686 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.668172 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.668602 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.668661 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.668860 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.669062 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.669602 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.669675 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.669751 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.669865 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.669917 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.670019 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.670025 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.670145 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.670220 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.670432 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.670529 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.672776 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.673118 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.673237 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.673510 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.673641 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.674008 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.674028 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.674224 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.674238 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.674292 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.674422 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.674443 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.674577 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.675010 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.675577 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.676365 4520 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.676597 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.676619 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.676696 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.676926 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.676986 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.677047 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.677293 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.677357 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.677632 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.677724 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.677964 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.678404 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.678669 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.678902 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.679186 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.679354 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.679785 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.680238 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.680441 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.681113 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.682017 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.682135 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.682150 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.683297 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.683547 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686350 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686395 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686587 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686640 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686638 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686759 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686797 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686865 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.686890 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.687077 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.687109 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.687113 4520 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.689080 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:05.189063239 +0000 UTC m=+18.817415421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.687156 4520 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.689251 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:05.189241044 +0000 UTC m=+18.817593225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.687277 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.687457 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.687636 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.688210 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.688478 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.690722 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.690875 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.690598 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.691420 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.691429 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.691674 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.691966 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.692050 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.692109 4520 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.692216 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.692293 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:05.192216638 +0000 UTC m=+18.820568820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.692423 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.692633 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.699691 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.699824 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.700013 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.700581 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.700690 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.700783 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.701020 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.701108 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.701341 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.701376 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.701580 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.701759 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.702070 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.703232 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.703458 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.703627 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.703813 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.704005 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.700769 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.704504 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.707057 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.707828 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.707868 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.707878 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.707995 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.708062 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.708099 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.708130 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.708239 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.708264 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.708280 4520 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:04 crc kubenswrapper[4520]: E0130 06:45:04.708352 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:05.208340617 +0000 UTC m=+18.836692797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.708559 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.708735 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.708951 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.709558 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.714122 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.716039 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.718347 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.719047 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.719263 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.719297 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.719351 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
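
Note the circularity in the entry just above: the pod whose status cannot be patched, openshift-network-node-identity/network-node-identity-vrzqb, is the very pod that serves the pod.network-node-identity.openshift.io webhook, so its own recovery cannot be reported until it has recovered. When triaging a burst of these, it helps to group the failures by webhook name and endpoint; a small assumed helper (not part of the kubelet) that pulls both out of one captured line:

    // Assumed helper, not part of kubelet: extract the webhook name and
    // URL from a captured "Failed to update status for pod" line.
    // (A raw journal line escapes the quotes as \"; unescape first.)
    package main

    import (
    	"fmt"
    	"strings"
    )

    // extractBetween returns the text between the first occurrence of
    // start and the next occurrence of end, or "" if a marker is missing.
    func extractBetween(s, start, end string) string {
    	i := strings.Index(s, start)
    	if i < 0 {
    		return ""
    	}
    	s = s[i+len(start):]
    	j := strings.Index(s, end)
    	if j < 0 {
    		return ""
    	}
    	return s[:j]
    }

    func main() {
    	line := `failed calling webhook "pod.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/pod?timeout=10s": dial tcp 127.0.0.1:9743: connect: connection refused`
    	fmt.Println("webhook:", extractBetween(line, `failed calling webhook "`, `"`))
    	fmt.Println("url:    ", extractBetween(line, `Post "`, `"`))
    }
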
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.720038 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.721254 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.721927 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.723113 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.723841 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.726006 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.726483 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.726796 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.728353 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.728973 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.729637 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.730303 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.731138 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.731288 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.731698 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.732757 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.733109 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 06:45:04 crc 
kubenswrapper[4520]: I0130 06:45:04.733660 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.734541 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.734940 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.735766 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.736150 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.737041 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.737465 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.738211 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.738442 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.739955 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.740668 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.741626 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.742063 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.742856 4520 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.742952 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.744417 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.745554 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.745643 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.746031 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.747775 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.748319 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.749096 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.749652 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.750489 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.750887 4520 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.751769 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.752390 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.753190 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.753322 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.753891 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.754910 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.755589 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.756646 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.756775 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.757111 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.758138 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.758733 4520 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89" exitCode=255 Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.758766 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.759537 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
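
Interleaved with the cleanup, the PLEG (generic.go:334) reports that container cf8f6197... of kube-apiserver-crc finished with exitCode=255, which the SyncLoop then consumes as the ContainerDied event a few entries later. The exit codes in this excerpt follow the usual Unix convention: a status is reported modulo 256, so exit(-1) surfaces as 255, and the 137s in the status patches above are 128+9, i.e. termination by SIGKILL. A tiny illustration of the truncation (generic Unix behaviour, nothing specific to these containers):

    // Exit statuses are reported as their low 8 bits: exit(-1) -> 255,
    // and 137 = 128+9 means the process was killed by SIGKILL.
    package main

    import "fmt"

    func main() {
    	for _, code := range []int{-1, 9 + 128, 255} {
    		fmt.Printf("exit(%d) reported as %d\n", code, code&0xFF)
    	}
    }
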
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.760350 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.761159 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.761902 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89"} Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.762813 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.762840 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.762879 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.762968 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.762992 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763006 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763016 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763025 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 
06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763033 4520 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763042 4520 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763050 4520 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763059 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763067 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763075 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763084 4520 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763091 4520 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763102 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763110 4520 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763117 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763124 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763132 4520 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc 
kubenswrapper[4520]: I0130 06:45:04.763139 4520 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763147 4520 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763154 4520 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763170 4520 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763178 4520 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763186 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763193 4520 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763204 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763212 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763220 4520 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763227 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763235 4520 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763244 4520 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc 
kubenswrapper[4520]: I0130 06:45:04.763252 4520 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763261 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763271 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763279 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763287 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763294 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763316 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763325 4520 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763333 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763447 4520 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763778 4520 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763793 4520 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763803 4520 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath 
\"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763815 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763827 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763837 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763847 4520 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763856 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763865 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763898 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763914 4520 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763923 4520 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763933 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763950 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763958 4520 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763966 4520 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763973 4520 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763983 4520 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.763993 4520 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764002 4520 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764009 4520 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764017 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764027 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764034 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764042 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764050 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764057 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764065 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764072 4520 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764080 4520 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764087 4520 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764096 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764104 4520 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764114 4520 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764121 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764129 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764143 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764152 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764169 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764179 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764187 4520 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764196 4520 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 
06:45:04.764205 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764215 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764224 4520 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764232 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764241 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764249 4520 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764257 4520 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764267 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764275 4520 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764283 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764290 4520 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764761 4520 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764773 4520 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc 
kubenswrapper[4520]: I0130 06:45:04.764784 4520 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764793 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764802 4520 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764811 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764820 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764829 4520 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764837 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764848 4520 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764865 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764874 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764883 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764891 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764917 4520 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath 
\"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764927 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764936 4520 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764948 4520 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764956 4520 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764964 4520 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764972 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764981 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764989 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.764997 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765006 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765014 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765022 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765030 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765039 4520 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765047 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765060 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765068 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765077 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765086 4520 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765096 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765106 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765116 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765124 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765133 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765147 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765157 4520 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc 
kubenswrapper[4520]: I0130 06:45:04.765177 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765188 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765198 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765206 4520 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765215 4520 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765224 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765234 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765243 4520 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765252 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765260 4520 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765269 4520 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765278 4520 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765287 4520 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" 
DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765366 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765378 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765387 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765396 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765405 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765415 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765425 4520 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765435 4520 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765460 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765469 4520 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765478 4520 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765486 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765495 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765504 4520 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765536 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765545 4520 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765553 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765562 4520 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765570 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765578 4520 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765586 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765595 4520 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765603 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765611 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765618 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765626 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765634 4520 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765642 4520 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765650 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765658 4520 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765670 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765678 4520 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765687 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765699 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765708 4520 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765720 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765728 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765736 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765743 4520 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765751 
4520 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765759 4520 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765768 4520 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765777 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765786 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.765794 4520 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.766121 4520 scope.go:117] "RemoveContainer" containerID="cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.766620 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.766637 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.774213 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.781948 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.788198 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.795727 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.803219 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.955156 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 06:45:04 crc kubenswrapper[4520]: W0130 06:45:04.963342 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-0196e0ba9507fcf6d476bc76c81570c05859f3794a25adedd0b284567bd2298b WatchSource:0}: Error finding container 0196e0ba9507fcf6d476bc76c81570c05859f3794a25adedd0b284567bd2298b: Status 404 returned error can't find the container with id 0196e0ba9507fcf6d476bc76c81570c05859f3794a25adedd0b284567bd2298b Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.968912 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.973172 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.977836 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.983062 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.985144 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:04 crc kubenswrapper[4520]: I0130 06:45:04.993692 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.003611 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.022306 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-ap
iserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.044895 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.074589 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.091595 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.103553 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.112996 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30
T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.122074 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.132396 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.140459 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.148837 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.156347 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.165427 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.169804 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.170042 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:45:06.170013041 +0000 UTC m=+19.798365222 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.174717 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.270724 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271027 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271058 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271081 4520 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271137 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:06.271119637 +0000 UTC m=+19.899471818 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.271234 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.271316 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.271399 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271470 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271492 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271504 4520 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271551 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:06.271541381 +0000 UTC m=+19.899893562 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271616 4520 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271649 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:06.271641068 +0000 UTC m=+19.899993249 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271788 4520 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:05 crc kubenswrapper[4520]: E0130 06:45:05.271930 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:06.271911607 +0000 UTC m=+19.900263788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.653496 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:34:03.315358109 +0000 UTC Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.763372 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c"} Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.763427 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a"} Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.763442 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0196e0ba9507fcf6d476bc76c81570c05859f3794a25adedd0b284567bd2298b"} Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.765388 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.767322 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782"} Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.768377 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.769074 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203"} Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.769117 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6b0a33407f8245336e591fc82c06a4cba919a1ccd2b0956056dd98998f138f32"} Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.770099 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"482719b78326a373439f07bdd8e2a0974fb3644777c3e6bbfe339a9396002012"} Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.777460 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.788180 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.801589 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.812801 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.824908 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.834639 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.844938 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30
T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.858257 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.870285 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.882901 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.892253 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.903911 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.916369 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.925938 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.935711 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:05 crc kubenswrapper[4520]: I0130 06:45:05.943666 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:05Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.178848 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.179059 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:45:08.179029295 +0000 UTC m=+21.807381465 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.211933 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.220900 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.222661 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef
647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.225230 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.232973 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.243182 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.251992 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 
06:45:06.261245 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.269753 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.278278 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.279426 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.279466 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.279488 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.279507 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279601 4520 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279609 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279631 4520 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279647 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:08.27963548 +0000 UTC m=+21.907987661 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279666 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:08.279656128 +0000 UTC m=+21.908008310 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279632 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279683 4520 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279705 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:08.279699971 +0000 UTC m=+21.908052152 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279608 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279725 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279731 4520 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.279747 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:08.279743043 +0000 UTC m=+21.908095224 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.286607 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.294811 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.302219 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.309639 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.318748 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.326333 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.333737 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.346836 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.355663 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.363535 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.654915 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:12:46.596520874 +0000 UTC Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.685687 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.685757 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.685888 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.686033 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.685720 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:06 crc kubenswrapper[4520]: E0130 06:45:06.686336 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.700184 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.709660 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.718722 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.727655 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.736409 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.745440 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.754055 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.765297 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.773209 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba"} Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.778495 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.792299 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a
6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.802477 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.813935 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.822668 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.835066 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.843825 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.853693 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.864554 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:06 crc kubenswrapper[4520]: I0130 06:45:06.873261 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.351147 4520 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.353794 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.353831 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.353842 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.353876 4520 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.359263 4520 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.359530 4520 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.360064 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.360084 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.360092 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.360103 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.360112 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
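
Every "Failed to update status for pod" entry above fails for the same underlying reason: the API server must consult the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 before accepting the patch, and the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. The following is a minimal Go sketch (illustrative only, not part of the kubelet) that inspects what the listener actually presents, assuming it is still serving on the loopback address taken from these entries:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Dial without verification so the expired chain can still be read;
        // verification is exactly the step that fails in the entries above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:  %s\n", cert.Subject)
        fmt.Printf("notAfter: %s\n", cert.NotAfter)
        fmt.Printf("expired:  %v\n", time.Now().After(cert.NotAfter))
    }

If the notAfter printed matches the 2025-08-24T17:21:41Z in the errors, the fix path is rotating the webhook's serving certificate or correcting the node clock; both are assumptions about the environment, not something the log itself states.
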
Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: E0130 06:45:07.379181 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
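
The patch bodies in these entries are hard to read because they are escaped twice: once when the patch JSON was embedded in the Go error string, and once more when klog quoted the err= field. A small sketch that reverses both layers, using a shortened stand-in payload (the uid is copied from the kube-controller-manager entry above; a full line would be processed the same way):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
        "strconv"
    )

    func main() {
        // Shortened stand-in for a patch body as it appears inside err="..." above.
        raw := `"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"}}"`

        // First Unquote strips the journal-level escaping; the second strips the
        // escaping added when the patch string was embedded in the error message.
        once, err := strconv.Unquote(raw)
        if err != nil {
            log.Fatal(err)
        }
        twice, err := strconv.Unquote(`"` + once + `"`)
        if err != nil {
            log.Fatal(err)
        }

        // Pretty-print the recovered JSON for reading.
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, []byte(twice), "", "  "); err != nil {
            log.Fatal(err)
        }
        fmt.Println(pretty.String())
    }
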
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:07Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.382418 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.382460 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.382471 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.382486 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.382501 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: E0130 06:45:07.393205 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:07Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.399691 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.399727 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.399739 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.399757 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.399768 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: E0130 06:45:07.413383 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:07Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.416405 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.416448 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.416459 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.416472 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.416481 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: E0130 06:45:07.425308 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:07Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.428095 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.428127 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.428139 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.428154 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.428165 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: E0130 06:45:07.438322 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:07Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:07 crc kubenswrapper[4520]: E0130 06:45:07.438428 4520 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.439974 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.440003 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.440012 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.440024 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.440033 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.542739 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.542782 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.542794 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.542809 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.542820 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.645230 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.645270 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.645281 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.645297 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.645309 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.655635 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:18:37.394724918 +0000 UTC Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.747312 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.747392 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.747403 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.747431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.747448 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.849437 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.849590 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.849601 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.849621 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.849634 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.951949 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.951989 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.951998 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.952015 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:07 crc kubenswrapper[4520]: I0130 06:45:07.952025 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:07Z","lastTransitionTime":"2026-01-30T06:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.053672 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.053727 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.053743 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.053763 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.053774 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.156161 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.156212 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.156221 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.156239 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.156253 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.193595 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.193741 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:45:12.193719788 +0000 UTC m=+25.822071969 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.258613 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.258902 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.258967 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.259028 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.259081 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.294327 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.294385 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.294416 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.294446 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294656 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294688 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294706 4520 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294780 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:12.294757977 +0000 UTC m=+25.923110157 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294818 4520 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294875 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294906 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294913 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:12.294894995 +0000 UTC m=+25.923247175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294920 4520 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.294972 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-30 06:45:12.294956791 +0000 UTC m=+25.923308971 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.295201 4520 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.295346 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:12.295327537 +0000 UTC m=+25.923679719 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.327978 4520 csr.go:261] certificate signing request csr-bs7gc is approved, waiting to be issued Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.340555 4520 csr.go:257] certificate signing request csr-bs7gc is issued Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.362032 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.362072 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.362083 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.362100 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.362111 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
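The repeated "object ... not registered" errors above do not mean the ConfigMaps and Secrets are missing from the API server; they mean the restarted kubelet has not yet populated its per-namespace informer caches, so projected, configmap, and secret volume SetUp cannot read the objects. Each failed mount is requeued with the same 4s backoff and clears once the corresponding "Caches populated for *v1.ConfigMap ..." reflector entries appear (below at 06:45:09 they appear for openshift-multus, openshift-dns, and openshift-machine-config-operator; the diagnostics and console namespaces are still pending in this window). A hedged spot-check that the objects themselves exist, assuming oc access:

  oc -n openshift-network-diagnostics get configmap kube-root-ca.crt openshift-service-ca.crt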
Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.464107 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.464161 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.464183 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.464202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.464214 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.565917 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.566189 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.566265 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.566323 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.566388 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
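The NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady block repeats roughly every 100ms because each node-status sync re-records the same conditions until Ready flips to true; the blocker is stated in the condition message itself: there is no CNI configuration file in /etc/kubernetes/cni/net.d/ yet. On this node that file is laid down by the multus daemonset (its pod is ADDed below at 06:45:09), so the condition is expected to clear without intervention. A hedged check while it persists:

  # the exact directory the runtime is polling, taken from the log message
  ls -l /etc/kubernetes/cni/net.d/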
Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.656660 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:02:46.320897655 +0000 UTC Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.669275 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.669329 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.669342 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.669376 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.669390 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.685601 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.685632 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.685777 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.685651 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.685969 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:08 crc kubenswrapper[4520]: E0130 06:45:08.685889 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.772056 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.772087 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.772096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.772112 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.772123 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.878622 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.878661 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.878672 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.878686 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.878694 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.981475 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.981530 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.981541 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.981557 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:08 crc kubenswrapper[4520]: I0130 06:45:08.981567 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:08Z","lastTransitionTime":"2026-01-30T06:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.083827 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.084014 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.084093 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.084149 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.084215 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.187105 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.187140 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.187149 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.187163 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.187171 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.261664 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kdqjc"] Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.262165 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hf7k5"] Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.262348 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.262442 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hf7k5" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.262679 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mn7g2"] Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.263442 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dkqtt"] Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.263592 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.263778 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.267841 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.268339 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.268477 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.268503 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.268579 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.268764 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.268952 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.269207 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.269271 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.269317 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.269297 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.269406 4520 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.269439 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.269528 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.269612 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.290107 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.290128 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.290136 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.290150 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.290158 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
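All of the status_manager "Failed to update status for pod" entries that begin below share one root cause, spelled out at the end of each: the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 presents a certificate that expired 2025-08-24, while the node clock reads 2026-01-30. Together with the kubelet-serving rotation deadline above (2025-11-22) already lying in the past, this is consistent with a VM that was suspended and resumed months later; TLS-dependent calls keep failing until the certificates are regenerated. A hedged way to read the webhook certificate's validity window from the node:

  echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates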
Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.293148 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.301958 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc2qj\" (UniqueName: \"kubernetes.io/projected/e5f51275-c0b1-4467-bf4a-ef848e3521df-kube-api-access-zc2qj\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.301987 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-conf-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302004 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-daemon-config\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") 
" pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302019 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-system-cni-dir\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302033 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-cni-multus\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302046 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-hostroot\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302062 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vk69\" (UniqueName: \"kubernetes.io/projected/ee18b84b-4e10-42ed-ac93-557943206072-kube-api-access-7vk69\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302081 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-os-release\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302093 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-netns\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302106 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-etc-kubernetes\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302120 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee18b84b-4e10-42ed-ac93-557943206072-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302133 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-cnibin\") pod \"multus-mn7g2\" (UID: 
\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302145 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvlk\" (UniqueName: \"kubernetes.io/projected/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-kube-api-access-bhvlk\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302157 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-cnibin\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302172 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-system-cni-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302195 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-cni-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302208 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-multus-certs\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302221 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1449aaf1-dd5f-42a6-89e3-5cd09937b8a2-hosts-file\") pod \"node-resolver-hf7k5\" (UID: \"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\") " pod="openshift-dns/node-resolver-hf7k5" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302235 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-cni-binary-copy\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302250 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-k8s-cni-cncf-io\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302262 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee18b84b-4e10-42ed-ac93-557943206072-cni-binary-copy\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: 
\"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302283 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-kubelet\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302301 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-cni-bin\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302313 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e5f51275-c0b1-4467-bf4a-ef848e3521df-rootfs\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302325 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5f51275-c0b1-4467-bf4a-ef848e3521df-mcd-auth-proxy-config\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302344 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-os-release\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302357 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-socket-dir-parent\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302369 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302381 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqhqx\" (UniqueName: \"kubernetes.io/projected/1449aaf1-dd5f-42a6-89e3-5cd09937b8a2-kube-api-access-sqhqx\") pod \"node-resolver-hf7k5\" (UID: \"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\") " pod="openshift-dns/node-resolver-hf7k5" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.302405 4520 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5f51275-c0b1-4467-bf4a-ef848e3521df-proxy-tls\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.311264 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.327432 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.342293 4520 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 06:40:08 
+0000 UTC, rotation deadline is 2026-12-11 03:42:58.39563605 +0000 UTC Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.342343 4520 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7556h57m49.053295536s for next certificate rotation Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.343289 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qu
ay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.357694 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.378161 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.386545 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.392420 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.392454 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.392464 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.392481 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.392491 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403254 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-cni-binary-copy\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403367 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-k8s-cni-cncf-io\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403415 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-kubelet\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403442 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee18b84b-4e10-42ed-ac93-557943206072-cni-binary-copy\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403494 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-k8s-cni-cncf-io\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403539 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-cni-bin\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403595 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e5f51275-c0b1-4467-bf4a-ef848e3521df-rootfs\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403610 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-cni-bin\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403610 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-kubelet\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 
06:45:09.403623 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5f51275-c0b1-4467-bf4a-ef848e3521df-mcd-auth-proxy-config\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403685 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-os-release\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403691 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e5f51275-c0b1-4467-bf4a-ef848e3521df-rootfs\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403728 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-socket-dir-parent\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403748 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403766 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqhqx\" (UniqueName: \"kubernetes.io/projected/1449aaf1-dd5f-42a6-89e3-5cd09937b8a2-kube-api-access-sqhqx\") pod \"node-resolver-hf7k5\" (UID: \"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\") " pod="openshift-dns/node-resolver-hf7k5" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403784 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5f51275-c0b1-4467-bf4a-ef848e3521df-proxy-tls\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403799 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-conf-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403816 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc2qj\" (UniqueName: \"kubernetes.io/projected/e5f51275-c0b1-4467-bf4a-ef848e3521df-kube-api-access-zc2qj\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 
06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403834 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-daemon-config\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403849 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-system-cni-dir\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403863 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-cni-multus\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403875 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-hostroot\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403889 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vk69\" (UniqueName: \"kubernetes.io/projected/ee18b84b-4e10-42ed-ac93-557943206072-kube-api-access-7vk69\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403909 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-cnibin\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403923 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-os-release\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403936 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-netns\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403951 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-etc-kubernetes\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403966 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/ee18b84b-4e10-42ed-ac93-557943206072-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.403995 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-system-cni-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404009 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhvlk\" (UniqueName: \"kubernetes.io/projected/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-kube-api-access-bhvlk\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404022 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-cnibin\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404040 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-cni-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404096 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-multus-certs\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404113 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1449aaf1-dd5f-42a6-89e3-5cd09937b8a2-hosts-file\") pod \"node-resolver-hf7k5\" (UID: \"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\") " pod="openshift-dns/node-resolver-hf7k5" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404205 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1449aaf1-dd5f-42a6-89e3-5cd09937b8a2-hosts-file\") pod \"node-resolver-hf7k5\" (UID: \"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\") " pod="openshift-dns/node-resolver-hf7k5" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404252 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-os-release\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404250 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-cni-binary-copy\") pod \"multus-mn7g2\" (UID: 
\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404286 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-socket-dir-parent\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404321 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-cnibin\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404352 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-os-release\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404374 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-netns\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404394 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-etc-kubernetes\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404528 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e5f51275-c0b1-4467-bf4a-ef848e3521df-mcd-auth-proxy-config\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404637 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-system-cni-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404681 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee18b84b-4e10-42ed-ac93-557943206072-cni-binary-copy\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404735 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-var-lib-cni-multus\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404782 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-system-cni-dir\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404748 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-cnibin\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404873 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-cni-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404934 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-hostroot\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.404890 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-host-run-multus-certs\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.405018 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee18b84b-4e10-42ed-ac93-557943206072-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.405073 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-conf-dir\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.405284 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-multus-daemon-config\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.405393 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee18b84b-4e10-42ed-ac93-557943206072-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.407317 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.412571 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e5f51275-c0b1-4467-bf4a-ef848e3521df-proxy-tls\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.422199 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqhqx\" (UniqueName: \"kubernetes.io/projected/1449aaf1-dd5f-42a6-89e3-5cd09937b8a2-kube-api-access-sqhqx\") pod \"node-resolver-hf7k5\" (UID: \"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\") " pod="openshift-dns/node-resolver-hf7k5" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.427507 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vk69\" (UniqueName: \"kubernetes.io/projected/ee18b84b-4e10-42ed-ac93-557943206072-kube-api-access-7vk69\") pod \"multus-additional-cni-plugins-kdqjc\" (UID: \"ee18b84b-4e10-42ed-ac93-557943206072\") " pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.430131 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.430455 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc2qj\" (UniqueName: \"kubernetes.io/projected/e5f51275-c0b1-4467-bf4a-ef848e3521df-kube-api-access-zc2qj\") pod \"machine-config-daemon-dkqtt\" (UID: \"e5f51275-c0b1-4467-bf4a-ef848e3521df\") " pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.432967 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhvlk\" (UniqueName: \"kubernetes.io/projected/dfdf507d-4d3e-40ac-a9dc-c39c411f4c26-kube-api-access-bhvlk\") pod \"multus-mn7g2\" (UID: \"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\") " pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.439385 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.449645 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.458500 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.467042 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.476611 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.484623 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.491572 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.494013 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.494060 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.494070 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.494084 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.494093 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.500267 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: 
I0130 06:45:09.509704 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.519448 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.536848 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.553618 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.563337 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.572237 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.573918 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.581166 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hf7k5" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.581971 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.590445 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.597573 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.597598 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.597606 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.597620 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.597629 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.597781 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mn7g2" Jan 30 06:45:09 crc kubenswrapper[4520]: W0130 06:45:09.604195 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5f51275_c0b1_4467_bf4a_ef848e3521df.slice/crio-408fcc09d0d214cdc4c0b2da6109844e6e6bcc7b67e5331aeffc3bb4cb014bed WatchSource:0}: Error finding container 408fcc09d0d214cdc4c0b2da6109844e6e6bcc7b67e5331aeffc3bb4cb014bed: Status 404 returned error can't find the container with id 408fcc09d0d214cdc4c0b2da6109844e6e6bcc7b67e5331aeffc3bb4cb014bed Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.649829 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tm5s"] Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.651301 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.653258 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.653285 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.653498 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.653613 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.653734 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.654052 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.654385 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.659402 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:10:33.417687512 +0000 UTC Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.664222 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.676098 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.685144 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.695424 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.699218 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.699251 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.699263 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.699278 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.699289 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706365 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-kubelet\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706416 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-slash\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706438 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovn-node-metrics-cert\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706459 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-etc-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706493 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-systemd\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706531 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-var-lib-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706556 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-netd\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706575 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-bin\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.706593 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.707145 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-netns\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.707205 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.707204 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.707357 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-script-lib\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.707380 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc94g\" (UniqueName: \"kubernetes.io/projected/705f09bd-e1b6-47fd-83db-189fbe9a7b95-kube-api-access-zc94g\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc 
kubenswrapper[4520]: I0130 06:45:09.707438 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-config\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.709654 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-systemd-units\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.709805 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-ovn\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.709872 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.709913 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-log-socket\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.709947 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-env-overrides\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.710024 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-node-log\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.717933 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.726932 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.736599 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.755143 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.766468 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.782225 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.783749 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" event={"ID":"ee18b84b-4e10-42ed-ac93-557943206072","Type":"ContainerStarted","Data":"41e26989503e9b7c80fb0df15f02ae0b13141633e56df299595b1d1f4ee05ca2"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.786909 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mn7g2" event={"ID":"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26","Type":"ContainerStarted","Data":"fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.786941 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mn7g2" event={"ID":"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26","Type":"ContainerStarted","Data":"021569ddb27e9b061afdcdbdb6bf3f62dd10802db12274cc920d9643a34db3a6"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.790917 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.790960 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"408fcc09d0d214cdc4c0b2da6109844e6e6bcc7b67e5331aeffc3bb4cb014bed"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.791659 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hf7k5" event={"ID":"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2","Type":"ContainerStarted","Data":"5aa10fffc5b862bacbb53701464ce0c9ff6c6dcbb8ca4016f6344796172d9424"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.801070 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.801104 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.801114 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.801127 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.801139 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.803794 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811029 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-netns\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 
crc kubenswrapper[4520]: I0130 06:45:09.811056 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811076 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-script-lib\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811092 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc94g\" (UniqueName: \"kubernetes.io/projected/705f09bd-e1b6-47fd-83db-189fbe9a7b95-kube-api-access-zc94g\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811117 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-config\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811145 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-systemd-units\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811159 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-ovn\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811174 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811198 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-log-socket\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811214 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-env-overrides\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811231 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-node-log\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811251 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-kubelet\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811265 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-slash\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811280 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovn-node-metrics-cert\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811299 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-etc-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811314 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-systemd\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811331 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-var-lib-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811347 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-netd\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811363 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-bin\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811377 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811438 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811472 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-netns\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.811492 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.812027 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-script-lib\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.812647 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-config\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.812686 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-systemd-units\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.812711 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-ovn\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.812732 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.812755 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-log-socket\") pod 
\"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813034 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-env-overrides\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813074 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-node-log\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813097 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-kubelet\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813119 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-slash\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813424 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-var-lib-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813462 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-etc-openvswitch\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813485 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-systemd\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813629 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-netd\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.813661 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-bin\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.816930 4520 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.818811 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovn-node-metrics-cert\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.827139 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc94g\" (UniqueName: \"kubernetes.io/projected/705f09bd-e1b6-47fd-83db-189fbe9a7b95-kube-api-access-zc94g\") pod \"ovnkube-node-6tm5s\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.830851 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.840138 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.849885 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.858372 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41
ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.866342 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.875561 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.883186 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.891424 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.902654 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.902682 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.902691 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.902707 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.902716 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:09Z","lastTransitionTime":"2026-01-30T06:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.904215 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserv
er-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.917275 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.930058 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.962836 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.978871 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:09 crc kubenswrapper[4520]: W0130 06:45:09.989566 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod705f09bd_e1b6_47fd_83db_189fbe9a7b95.slice/crio-b4b099ea8e0891d3de244a88fda2e4e91bb5cb4c6c534b366fcf81c2e100acc7 WatchSource:0}: Error finding container b4b099ea8e0891d3de244a88fda2e4e91bb5cb4c6c534b366fcf81c2e100acc7: Status 404 returned error can't find the container with id b4b099ea8e0891d3de244a88fda2e4e91bb5cb4c6c534b366fcf81c2e100acc7 Jan 30 06:45:09 crc kubenswrapper[4520]: I0130 06:45:09.997155 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:09Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.004282 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.004316 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.004326 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.004341 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.004350 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.015597 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.031697 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.109202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.109239 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.109249 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.109263 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.109271 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.211140 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.211176 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.211194 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.211209 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.211218 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.314797 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.315018 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.315028 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.315045 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.315054 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.417528 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.417549 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.417557 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.417567 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.417575 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.519315 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.519352 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.519361 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.519373 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.519406 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.621314 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.621341 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.621351 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.621364 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.621371 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.659568 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:43:49.312216892 +0000 UTC Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.684812 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:10 crc kubenswrapper[4520]: E0130 06:45:10.684903 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.684820 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:10 crc kubenswrapper[4520]: E0130 06:45:10.684966 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.684927 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:10 crc kubenswrapper[4520]: E0130 06:45:10.685009 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.722871 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.722900 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.722911 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.722925 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.722934 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.795447 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154" exitCode=0 Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.795570 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.795793 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"b4b099ea8e0891d3de244a88fda2e4e91bb5cb4c6c534b366fcf81c2e100acc7"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.798338 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.799705 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hf7k5" event={"ID":"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2","Type":"ContainerStarted","Data":"5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.801341 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee18b84b-4e10-42ed-ac93-557943206072" containerID="3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb" exitCode=0 Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.801370 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" event={"ID":"ee18b84b-4e10-42ed-ac93-557943206072","Type":"ContainerDied","Data":"3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.810385 4520 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.823857 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.825176 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.825210 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.825221 4520 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.825235 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.825243 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.833441 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.850584 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.860614 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.869939 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.888709 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.899119 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.911133 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.924969 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.928559 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.928585 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.928596 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.928611 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.928620 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:10Z","lastTransitionTime":"2026-01-30T06:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.940734 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.951549 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.962543 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.981390 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:10 crc kubenswrapper[4520]: I0130 06:45:10.995722 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 
06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:10Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.008822 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.019333 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.028056 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.030538 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.030582 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.030593 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.030609 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.030618 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.037689 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.045836 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.059253 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.078330 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.089924 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.102430 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc 
kubenswrapper[4520]: I0130 06:45:11.116895 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.126749 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.132584 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.132623 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.132635 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.132656 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.132671 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.135822 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.144488 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.234947 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.234980 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.234991 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.235011 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.235024 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.337850 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.337886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.337897 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.337948 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.337964 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.439901 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.439929 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.439939 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.439953 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.439961 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.542155 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.542328 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.542344 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.542371 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.542388 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.644217 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.644241 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.644251 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.644260 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.644266 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.659703 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:30:59.220825871 +0000 UTC Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.746122 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.746168 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.746177 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.746206 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.746216 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.805921 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee18b84b-4e10-42ed-ac93-557943206072" containerID="37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4" exitCode=0 Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.805991 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" event={"ID":"ee18b84b-4e10-42ed-ac93-557943206072","Type":"ContainerDied","Data":"37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.810225 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.810252 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.810261 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.810269 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.810276 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" 
event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.810286 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.819497 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.833112 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.842734 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.848498 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.848544 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.848554 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.848573 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.848585 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.853023 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.865415 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261
918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" 
not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.874117 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.882468 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.892647 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.901777 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.909348 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.924774 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.934778 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.946786 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.951803 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.951838 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.951850 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.951873 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.951888 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:11Z","lastTransitionTime":"2026-01-30T06:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:11 crc kubenswrapper[4520]: I0130 06:45:11.961692 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc58
01cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:11Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.054430 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.054467 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.054478 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.054498 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.054509 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.156000 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.156047 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.156070 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.156090 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.156104 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.233936 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.234174 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:45:20.234150431 +0000 UTC m=+33.862502613 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.258573 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.258744 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.258805 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.258872 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.258926 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.335148 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.335307 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.335398 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.335462 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335535 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335576 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335593 4520 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335620 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335647 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335659 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:20.335641622 +0000 UTC m=+33.963993813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335661 4520 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335664 4520 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335744 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:20.335725068 +0000 UTC m=+33.964077259 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335849 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:20.335817391 +0000 UTC m=+33.964169573 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.335921 4520 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.336004 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:20.335989525 +0000 UTC m=+33.964341696 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.361299 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.361341 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.361354 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.361377 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.361391 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.464217 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.464254 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.464264 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.464278 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.464290 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.566809 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.566868 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.566883 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.566910 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.566924 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.660241 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:40:01.886532656 +0000 UTC Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.669109 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.669156 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.669169 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.669194 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.669208 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.685558 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.685584 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.685707 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.685721 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.685931 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:12 crc kubenswrapper[4520]: E0130 06:45:12.685982 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.771946 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.771971 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.771980 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.771991 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.772001 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.815373 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee18b84b-4e10-42ed-ac93-557943206072" containerID="5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58" exitCode=0 Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.815414 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" event={"ID":"ee18b84b-4e10-42ed-ac93-557943206072","Type":"ContainerDied","Data":"5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.827906 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.842289 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.858329 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.869872 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.873383 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.873419 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.873430 4520 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.873446 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.873456 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.879463 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.888539 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.896895 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.904936 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.917836 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.926837 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.936805 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.953027 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z 
is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.963854 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.972893 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:12Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.975499 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.975555 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.975566 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.975581 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:12 crc kubenswrapper[4520]: I0130 06:45:12.975591 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:12Z","lastTransitionTime":"2026-01-30T06:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.078106 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.078141 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.078151 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.078165 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.078176 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.180562 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.180661 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.180729 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.180804 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.180872 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.282762 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.282807 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.282817 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.282837 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.282851 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.384592 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.384693 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.384784 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.384857 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.384924 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.486965 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.487007 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.487018 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.487031 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.487042 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.589420 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.589462 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.589478 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.589505 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.589536 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.661307 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:00:44.717738687 +0000 UTC Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.692060 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.692112 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.692124 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.692143 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.692154 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.794532 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.794586 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.794596 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.794608 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.794620 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.821106 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.823264 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee18b84b-4e10-42ed-ac93-557943206072" containerID="5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c" exitCode=0 Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.823296 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" event={"ID":"ee18b84b-4e10-42ed-ac93-557943206072","Type":"ContainerDied","Data":"5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.840134 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.851262 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.861310 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.870315 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.880526 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.890557 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.896851 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.896875 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.896884 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.896900 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.896911 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:13Z","lastTransitionTime":"2026-01-30T06:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.901049 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.911570 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.925629 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z 
is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.941738 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.953349 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.963690 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.973793 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:13 crc kubenswrapper[4520]: I0130 06:45:13.984473 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:13Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.001582 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.001730 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.001814 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.001876 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.001932 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.104397 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.104439 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.104451 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.104470 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.104483 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.206497 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.206540 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.206552 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.206567 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.206578 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.260841 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.296388 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.305170 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.310935 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.310967 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.310979 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.310996 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.311009 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.315681 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:
45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.345054 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.353305 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.361360 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.369401 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.377615 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.390567 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 
06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.402083 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.413383 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.413468 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.413535 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.413590 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.413654 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.413618 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.424173 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.431735 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.438157 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.515538 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.515561 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.515569 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.515584 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.515595 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.617150 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.617377 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.617388 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.617401 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.617409 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.662304 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:32:34.669554277 +0000 UTC Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.684669 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:14 crc kubenswrapper[4520]: E0130 06:45:14.684782 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.684999 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.685060 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:14 crc kubenswrapper[4520]: E0130 06:45:14.685113 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:14 crc kubenswrapper[4520]: E0130 06:45:14.685228 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.719562 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.719585 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.719593 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.719621 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.719630 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.821779 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.821806 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.821817 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.821831 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.821843 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.828751 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee18b84b-4e10-42ed-ac93-557943206072" containerID="e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb" exitCode=0 Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.828784 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" event={"ID":"ee18b84b-4e10-42ed-ac93-557943206072","Type":"ContainerDied","Data":"e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.840938 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.855933 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2b
d9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.872332 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z 
is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.889871 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.902661 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.919653 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.924701 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.924732 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.924745 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.924765 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.924778 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:14Z","lastTransitionTime":"2026-01-30T06:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.928410 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.936576 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.946584 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.957315 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.967917 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.980978 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:14 crc kubenswrapper[4520]: I0130 06:45:14.990180 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.004044 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:14Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.026832 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.026860 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.026869 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.026886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.026896 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.079766 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-t6th8"] Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.080268 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.082087 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.082539 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.082630 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.082881 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.091093 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.100089 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.109360 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.117937 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.128300 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.129825 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.129880 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.129892 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.129910 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.129922 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.138506 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.150465 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.160239 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.165658 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg4lr\" (UniqueName: \"kubernetes.io/projected/ed0fb361-02d3-4a8d-90c6-2c386499c01f-kube-api-access-lg4lr\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.165796 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed0fb361-02d3-4a8d-90c6-2c386499c01f-host\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.165925 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ed0fb361-02d3-4a8d-90c6-2c386499c01f-serviceca\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.168844 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.176408 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.185596 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.199559 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.209172 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.219756 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.231749 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.231784 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.231797 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.231815 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.231826 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.232029 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc58
01cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.267368 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ed0fb361-02d3-4a8d-90c6-2c386499c01f-serviceca\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.267438 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg4lr\" (UniqueName: \"kubernetes.io/projected/ed0fb361-02d3-4a8d-90c6-2c386499c01f-kube-api-access-lg4lr\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.267471 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed0fb361-02d3-4a8d-90c6-2c386499c01f-host\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.267567 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed0fb361-02d3-4a8d-90c6-2c386499c01f-host\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.268847 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ed0fb361-02d3-4a8d-90c6-2c386499c01f-serviceca\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.284648 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg4lr\" (UniqueName: 
\"kubernetes.io/projected/ed0fb361-02d3-4a8d-90c6-2c386499c01f-kube-api-access-lg4lr\") pod \"node-ca-t6th8\" (UID: \"ed0fb361-02d3-4a8d-90c6-2c386499c01f\") " pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.333611 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.333652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.333662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.333684 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.333697 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.416376 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t6th8" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.434993 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.435173 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.435309 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.435393 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.435981 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.538309 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.538605 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.538616 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.538635 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.538646 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.641076 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.641103 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.641113 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.641124 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.641139 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
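Every "Node became not ready" condition in this stretch carries the same KubeletNotReady message: no CNI configuration file in /etc/kubernetes/cni/net.d/. Roughly speaking, the kubelet's network-readiness check comes down to finding a loadable config in that directory. A sketch of the equivalent test, where the directory comes from the message and the accepted extensions are the ones libcni scans for:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Directory named in the NetworkPluginNotReady message above.
        confDir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("read dir:", err)
            return
        }
        var found []string
        for _, e := range entries {
            switch strings.ToLower(filepath.Ext(e.Name())) {
            case ".conf", ".conflist", ".json": // extensions libcni accepts
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file; node stays NotReady")
            return
        }
        fmt.Println("CNI configs:", found)
    }

An empty result keeps NetworkReady=false, which is why the same condition is re-recorded roughly every 100 ms here while the OVN and multus pods are still initializing.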
Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.663379 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:03:32.890939495 +0000 UTC Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.743654 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.743694 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.743705 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.743719 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.743729 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.835066 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t6th8" event={"ID":"ed0fb361-02d3-4a8d-90c6-2c386499c01f","Type":"ContainerStarted","Data":"3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.835140 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t6th8" event={"ID":"ed0fb361-02d3-4a8d-90c6-2c386499c01f","Type":"ContainerStarted","Data":"139303ec19072d3f8abc6a795c11aa17569af18824ff6e1b25a98c1932b7dbbe"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.839900 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee18b84b-4e10-42ed-ac93-557943206072" containerID="ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f" exitCode=0 Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.840010 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" event={"ID":"ee18b84b-4e10-42ed-ac93-557943206072","Type":"ContainerDied","Data":"ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.845564 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.845591 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.845603 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.845616 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.845627 4520 setters.go:603] "Node became not ready" node="crc" 
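One line above stands out from the repetition: the kubelet-serving certificate manager reports an expiration of 2026-02-24 but a rotation deadline of 2026-01-01, almost a month in the past. client-go schedules rotation at a jittered point around 80% (plus or minus 10%) of the certificate's validity window; a small sketch of that calculation, in which the notBefore value is an assumption because the log only shows the expiry:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mirrors the client-go rule: rotate at notBefore plus a
    // jittered 70-90% of the total validity period.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
    }

    func main() {
        notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST",
            "2026-02-24 05:53:03 +0000 UTC") // expiry from the log line above
        if err != nil {
            panic(err)
        }
        notBefore := notAfter.AddDate(0, -9, 0) // assumed issue date; not in the log
        deadline := rotationDeadline(notBefore, notAfter)
        fmt.Println("deadline:", deadline, "overdue:", time.Now().After(deadline))
    }

A deadline in the past simply means rotation is already due; whether it can complete depends on the API server accepting a new CSR, which the surrounding failures call into question.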
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.845672 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3e8074b60688f928a517c47d0"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.845967 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.845998 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.849451 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1a
e34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.863293 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.871235 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.877113 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:15 crc kubenswrapper[4520]: 
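The check-endpoints termination message captured above is itself an embedded klog stream (the I/W/F-prefixed lines ending in the fatal pods "kube-apiserver-crc" not found). When pulling such embedded lines apart, a small parser for the klog header is handy; a sketch, with the sample line taken from the capture above and the printed field names chosen here purely for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klog header: severity, MMDD, wall time, pid, source file:line, message.
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        // Sample taken (unescaped) from the embedded crash output above.
        line := `F0130 06:45:04.235913       1 cmd.go:182] pods "kube-apiserver-crc" not found`
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }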
I0130 06:45:15.878235 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.891488 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.900445 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.909084 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.917667 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.932382 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.941676 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.948850 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.948880 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.948890 4520 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.948906 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.948916 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:15Z","lastTransitionTime":"2026-01-30T06:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.954875 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.967355 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z 
is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.977557 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.988579 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:15 crc kubenswrapper[4520]: I0130 06:45:15.999383 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:15Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.006806 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.014859 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.023088 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.031617 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.039427 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.048146 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.050847 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.050875 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.050886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.050904 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.050916 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.058310 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.069608 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.077673 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.086732 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.099351 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3
e8074b60688f928a517c47d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.112557 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.121227 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.129971 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.137928 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.147343 4520 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.152734 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.152771 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.152784 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.152802 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.152812 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.255597 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.255628 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.255638 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.255656 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.255667 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.357881 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.357912 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.357921 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.357934 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.357944 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.459778 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.459811 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.459823 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.459837 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.459846 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.561990 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.562036 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.562046 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.562066 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.562077 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.569152 4520 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.663491 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:37:18.805692266 +0000 UTC Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.664950 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.664984 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.664994 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.665009 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.665018 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.685378 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.685425 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.685480 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:16 crc kubenswrapper[4520]: E0130 06:45:16.685630 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:16 crc kubenswrapper[4520]: E0130 06:45:16.685762 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:16 crc kubenswrapper[4520]: E0130 06:45:16.685813 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.697618 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.705241 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.713243 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.721410 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.730690 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.740118 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.751637 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.764014 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.767227 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.767263 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.767271 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.767288 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.767297 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.777236 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.794127 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.803951 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.815462 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.829738 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3e8074b60688f928a517c47d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.840641 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.852482 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.852800 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.852930 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" event={"ID":"ee18b84b-4e10-42ed-ac93-557943206072","Type":"ContainerStarted","Data":"417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.863468 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.869635 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.869674 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.869688 4520 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.869706 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.869720 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.873612 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.890776 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.900411 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.913068 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.924708 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.939788 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.954881 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.969457 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.972619 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.972652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.972662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.972677 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.972687 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:16Z","lastTransitionTime":"2026-01-30T06:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.980124 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:16 crc kubenswrapper[4520]: I0130 06:45:16.999380 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.019217 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.034112 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.047844 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.064573 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3
e8074b60688f928a517c47d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.075615 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.075661 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.075674 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.075698 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.075712 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.177408 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.177443 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.177452 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.177466 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.177476 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.279305 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.279351 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.279361 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.279377 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.279387 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.381673 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.381916 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.381926 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.381940 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.381950 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.484096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.484155 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.484166 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.484190 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.484204 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.494578 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.494620 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.494634 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.494653 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.494664 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: E0130 06:45:17.504367 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.507222 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.507259 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.507270 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.507293 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.507306 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: E0130 06:45:17.516685 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.520054 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.520088 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.520098 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.520113 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.520122 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: E0130 06:45:17.529582 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.532424 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.532451 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
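
Every retry in this run fails identically, and the oversized image inventory in the patch payload is noise; the only operative part of each error is the tail: the kubelet's status PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743/node, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30T06:45:17Z. A minimal Go sketch of the validity-window check behind this error class follows; the certificate path is a hypothetical stand-in, since the log does not show where the webhook keeps its cert on disk:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path, for illustration only.
        pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // crypto/x509 performs this window check during chain verification;
        // failing it is what surfaces in the log as
        // "x509: certificate has expired or is not yet valid".
        now := time.Now()
        if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
            fmt.Printf("expired or not yet valid: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
            return
        }
        fmt.Println("certificate is within its validity window")
    }

On a CRC guest this pattern typically means the VM was resumed long after its embedded certificates aged out, so recovery lies in certificate rotation, not in anything the patch payload contains.
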
event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.532459 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.532473 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.532484 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: E0130 06:45:17.541727 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.546180 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.546222 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
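
A second, independent fault is visible in every "Node became not ready" line: NetworkReady=false because nothing has yet written a CNI configuration under /etc/kubernetes/cni/net.d/ (the network provider here is OVN-Kubernetes, whose ovnkube-controller container exits with code 1 further down in this log). The runtime's network-readiness probe is, in essence, a directory scan. A rough Go sketch follows, under the assumption that the accepted extensions match libcni's usual .conf/.conflist/.json set; it approximates, rather than reproduces, CRI-O's exact check:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            return
        }
        var configs []string
        for _, e := range entries {
            // Assumed extension list, mirroring libcni defaults.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                configs = append(configs, e.Name())
            }
        }
        if len(configs) == 0 {
            // Until the network provider drops a config here, the node's
            // Ready condition stays False with NetworkPluginNotReady.
            fmt.Println("no CNI configuration file found: node stays NotReady")
            return
        }
        fmt.Printf("CNI configs present: %v\n", configs)
    }
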
event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.546231 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.546246 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.546256 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: E0130 06:45:17.556618 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: E0130 06:45:17.556969 4520 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.586832 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
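
The "exceeds retry count" line marks the kubelet giving up for this sync iteration: upstream kubelet attempts the status update a fixed number of times per sync (the nodeStatusUpdateRetry constant, 5) and then returns exactly this error from kubelet_node_status.go. A compressed paraphrase of that control flow, with names abbreviated and the webhook failure reduced to a canned error:

    package main

    import (
        "errors"
        "fmt"
    )

    // Upstream kubelet's retry budget per status sync.
    const nodeStatusUpdateRetry = 5

    func tryUpdateNodeStatus(attempt int) error {
        // Stand-in for the real PATCH; in this log every attempt fails with
        // the webhook's x509 error, so the loop always exhausts its budget.
        return errors.New(`Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io"`)
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(i); err != nil {
                fmt.Printf("Error updating node status, will retry: %v\n", err)
                continue
            }
            return nil
        }
        return fmt.Errorf("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println("Unable to update node status:", err)
        }
    }

There is no backoff between attempts within one iteration, which is why the failed attempts above land within a few tens of milliseconds of each other.
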
event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.586873 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.586883 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.586900 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.586912 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.663797 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:15:22.106107271 +0000 UTC Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.689376 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.689417 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.689428 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.689445 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.689459 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.791994 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.792029 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.792039 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.792058 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.792070 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.858049 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/0.log" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.861325 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3e8074b60688f928a517c47d0" exitCode=1 Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.861425 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3e8074b60688f928a517c47d0"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.862171 4520 scope.go:117] "RemoveContainer" containerID="66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3e8074b60688f928a517c47d0" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.877618 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.887352 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.893555 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.893651 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.893718 4520 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.893772 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.893821 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.899367 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e2
8c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.914195 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3
e8074b60688f928a517c47d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3e8074b60688f928a517c47d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"s.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.34\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.34\\\\\\\", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 06:45:17.523621 5720 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}\\\\nI0130 06:45:17.522272 5720 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 06:45:17.523645 5720 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator for network=default : 2.515109ms\\\\nI0130 06:45:17.523660 5720 services_controller.go:356] Processing sync for service openshift-machine-api/control-plane-machine-set-operator for network=default\\\\nF0130 06:45:17.523684 5720 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.925552 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.936091 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.945876 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.954487 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.967875 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.978537 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.988742 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.996301 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.996331 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.996342 4520 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.996361 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:17 crc kubenswrapper[4520]: I0130 06:45:17.996373 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:17Z","lastTransitionTime":"2026-01-30T06:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.001319 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:17Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.013412 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.023534 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.032448 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.098114 4520 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.098147 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.098159 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.098174 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.098187 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.200941 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.201010 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.201022 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.201048 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.201073 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.303367 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.303412 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.303424 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.303439 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.303453 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.405261 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.405308 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.405320 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.405335 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.405349 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.508009 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.508058 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.508072 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.508096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.508109 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.610206 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.610260 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.610269 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.610286 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.610297 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.664669 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:19:47.405675155 +0000 UTC Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.684975 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.684998 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:18 crc kubenswrapper[4520]: E0130 06:45:18.685104 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.685150 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:18 crc kubenswrapper[4520]: E0130 06:45:18.685270 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:18 crc kubenswrapper[4520]: E0130 06:45:18.685333 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.712305 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.712339 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.712349 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.712361 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.712371 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.814578 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.814631 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.814641 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.814655 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.814667 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.866326 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/1.log" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.866965 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/0.log" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.870923 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf" exitCode=1 Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.870964 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.871010 4520 scope.go:117] "RemoveContainer" containerID="66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3e8074b60688f928a517c47d0" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.871714 4520 scope.go:117] "RemoveContainer" containerID="0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf" Jan 30 06:45:18 crc kubenswrapper[4520]: E0130 06:45:18.871929 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.884494 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.894017 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.903503 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.913919 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.916331 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.916360 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.916373 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.916390 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.916403 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:18Z","lastTransitionTime":"2026-01-30T06:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.925507 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.936297 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.945443 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.961199 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.971957 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.983801 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:18 crc kubenswrapper[4520]: I0130 06:45:18.997475 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e
21807cba84609c063ba525cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c8ff098e29674a4d3c48ceeb0d8f2b633c30b3e8074b60688f928a517c47d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:17Z\\\",\\\"message\\\":\\\"s.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.34\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.34\\\\\\\", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 06:45:17.523621 5720 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}\\\\nI0130 06:45:17.522272 5720 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0130 06:45:17.523645 5720 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator for network=default : 2.515109ms\\\\nI0130 06:45:17.523660 5720 services_controller.go:356] Processing sync for service openshift-machine-api/control-plane-machine-set-operator for network=default\\\\nF0130 06:45:17.523684 5720 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:18Z\\\",\\\"message\\\":\\\"onfig-daemon-dkqtt\\\\nI0130 06:45:18.541222 5873 services_controller.go:443] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.153\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0130 06:45:18.541224 5873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.007980 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.018553 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.018750 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.018786 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.018800 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.018826 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.018840 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.029941 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.038601 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.121290 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.121413 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.121489 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.121583 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.121660 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.222992 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.223034 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.223047 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.223067 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.223083 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.325431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.325587 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.325699 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.325792 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.325875 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.427955 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.427997 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.428007 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.428021 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.428030 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.529807 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.529851 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.529865 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.529887 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.529901 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.632656 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.632701 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.632712 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.632742 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.632758 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.665608 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:15:06.984162574 +0000 UTC Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.734777 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.734810 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.734825 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.734836 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.734846 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.836937 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.836978 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.836993 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.837009 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.837019 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.876277 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/1.log" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.880002 4520 scope.go:117] "RemoveContainer" containerID="0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf" Jan 30 06:45:19 crc kubenswrapper[4520]: E0130 06:45:19.880182 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.889109 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.899663 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.908803 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.918190 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.926483 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.934937 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.938537 4520 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.938572 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.938583 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.938599 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.938610 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:19Z","lastTransitionTime":"2026-01-30T06:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.945443 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.957417 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.968094 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.982963 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e
21807cba84609c063ba525cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:18Z\\\",\\\"message\\\":\\\"onfig-daemon-dkqtt\\\\nI0130 06:45:18.541222 5873 services_controller.go:443] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.153\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0130 06:45:18.541224 5873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:19 crc kubenswrapper[4520]: I0130 06:45:19.998143 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9c
c3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.008063 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.019279 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.027007 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.036276 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.040473 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.040501 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.040510 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.040537 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.040547 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.142485 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.142579 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.142587 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.142601 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.142609 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.244668 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.244770 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.244827 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.244878 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.244923 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.318796 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.318991 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:45:36.318971394 +0000 UTC m=+49.947323564 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.347812 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.347953 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.348017 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.348093 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.348156 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.419543 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.419579 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.419603 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.419625 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419761 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419782 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419793 4520 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419832 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:36.419820106 +0000 UTC m=+50.048172287 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419888 4520 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419941 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419968 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419981 4520 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.419945 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:36.419931485 +0000 UTC m=+50.048283666 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.420035 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-30 06:45:36.420022326 +0000 UTC m=+50.048374508 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.420161 4520 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.420255 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:36.420246227 +0000 UTC m=+50.048598408 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.449608 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.449638 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.449650 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.449665 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.449677 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.551206 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.551255 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.551266 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.551279 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.551289 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.652585 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.652613 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.652621 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.652630 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.652638 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.666037 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:03:32.543120385 +0000 UTC Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.685436 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.685562 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.685448 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.685655 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.685731 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:20 crc kubenswrapper[4520]: E0130 06:45:20.685937 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.754602 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.754627 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.754635 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.754647 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.754654 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.794815 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"] Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.795188 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.796579 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.796582 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.809299 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.819009 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.828937 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.842304 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e
21807cba84609c063ba525cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:18Z\\\",\\\"message\\\":\\\"onfig-daemon-dkqtt\\\\nI0130 06:45:18.541222 5873 services_controller.go:443] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.153\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0130 06:45:18.541224 5873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.850485 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.857299 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.857329 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.857339 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.857355 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.857367 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.860843 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.869608 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.879657 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.886754 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.898894 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.912401 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.921604 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.922995 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d0da278-9de0-4cfe-8f2b-b15ce7445923-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" Jan 30 06:45:20 crc 
kubenswrapper[4520]: I0130 06:45:20.923034 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d0da278-9de0-4cfe-8f2b-b15ce7445923-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.923087 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d0da278-9de0-4cfe-8f2b-b15ce7445923-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.923109 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwgkb\" (UniqueName: \"kubernetes.io/projected/3d0da278-9de0-4cfe-8f2b-b15ce7445923-kube-api-access-pwgkb\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.931423 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.940154 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.947314 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.955868 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:20Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.959859 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.959888 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.959900 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.959918 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:20 crc kubenswrapper[4520]: I0130 06:45:20.959932 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:20Z","lastTransitionTime":"2026-01-30T06:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.023934 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d0da278-9de0-4cfe-8f2b-b15ce7445923-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.024054 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwgkb\" (UniqueName: \"kubernetes.io/projected/3d0da278-9de0-4cfe-8f2b-b15ce7445923-kube-api-access-pwgkb\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.024151 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d0da278-9de0-4cfe-8f2b-b15ce7445923-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.024237 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d0da278-9de0-4cfe-8f2b-b15ce7445923-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.024943 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d0da278-9de0-4cfe-8f2b-b15ce7445923-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.024982 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d0da278-9de0-4cfe-8f2b-b15ce7445923-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.030935 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d0da278-9de0-4cfe-8f2b-b15ce7445923-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.040072 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwgkb\" (UniqueName: \"kubernetes.io/projected/3d0da278-9de0-4cfe-8f2b-b15ce7445923-kube-api-access-pwgkb\") pod \"ovnkube-control-plane-749d76644c-tkcc8\" (UID: \"3d0da278-9de0-4cfe-8f2b-b15ce7445923\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.062320 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.062362 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.062372 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.062393 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.062404 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.105167 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.164853 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.164887 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.164899 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.164916 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.164927 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.266827 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.266902 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.266938 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.266955 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.266966 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.369157 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.369227 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.369240 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.369265 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.369284 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.471398 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.471441 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.471453 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.471469 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.471479 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.573282 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.573321 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.573330 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.573346 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.573357 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.666461 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:43:53.921357631 +0000 UTC
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.675312 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.675357 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.675368 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.675386 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.675396 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.777217 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.777266 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.777276 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.777293 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.777302 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.879264 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.879310 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.879322 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.879339 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.879353 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.885964 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" event={"ID":"3d0da278-9de0-4cfe-8f2b-b15ce7445923","Type":"ContainerStarted","Data":"fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.886025 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" event={"ID":"3d0da278-9de0-4cfe-8f2b-b15ce7445923","Type":"ContainerStarted","Data":"33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.886044 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" event={"ID":"3d0da278-9de0-4cfe-8f2b-b15ce7445923","Type":"ContainerStarted","Data":"7f6cf6a565fcf938454e51cd2e1620d762d3eb1e596b316d908249f5e5d82721"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.898408 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.907447 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.914297 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.923500 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.931768 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.940743 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.950148 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.959916 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.967375 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.979386 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.980824 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.980857 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.980868 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.980887 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.980897 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:21Z","lastTransitionTime":"2026-01-30T06:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:21 crc kubenswrapper[4520]: I0130 06:45:21.998811 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:21Z is after 2025-08-24T17:21:41Z"
Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.037314 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.053613 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.069739 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e
21807cba84609c063ba525cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:18Z\\\",\\\"message\\\":\\\"onfig-daemon-dkqtt\\\\nI0130 06:45:18.541222 5873 services_controller.go:443] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.153\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0130 06:45:18.541224 5873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.079119 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.086352 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.086390 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.086398 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.086410 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.086418 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.093536 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.188613 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.188637 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.188646 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.188657 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.188664 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.290548 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.290573 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.290582 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.290592 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.290599 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.392611 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.392654 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.392671 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.392689 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.392699 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.494296 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.494342 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.494357 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.494369 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.494381 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.558138 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-z5rcx"] Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.558924 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:22 crc kubenswrapper[4520]: E0130 06:45:22.559037 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.568702 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.577887 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.586430 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.595079 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.596355 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.596383 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.596391 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.596404 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.596411 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.604626 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.612533 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.620200 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.628583 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.636915 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.638170 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdr6\" (UniqueName: \"kubernetes.io/projected/6e1a8ebe-5163-47dd-a320-a286c92971c2-kube-api-access-2bdr6\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.638207 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:22 
crc kubenswrapper[4520]: I0130 06:45:22.645672 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.656084 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"fini
shedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616
e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.667338 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:16:13.757918292 +0000 UTC Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.669607 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e
21807cba84609c063ba525cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:18Z\\\",\\\"message\\\":\\\"onfig-daemon-dkqtt\\\\nI0130 06:45:18.541222 5873 services_controller.go:443] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.153\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0130 06:45:18.541224 5873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.678429 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.684891 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.684949 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:22 crc kubenswrapper[4520]: E0130 06:45:22.684990 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:22 crc kubenswrapper[4520]: E0130 06:45:22.685052 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.685281 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:22 crc kubenswrapper[4520]: E0130 06:45:22.685363 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.694251 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d
1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.697954 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.697979 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.697988 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.697998 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.698005 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.702896 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.711583 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.720612 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:22Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:22 crc 
kubenswrapper[4520]: I0130 06:45:22.738960 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdr6\" (UniqueName: \"kubernetes.io/projected/6e1a8ebe-5163-47dd-a320-a286c92971c2-kube-api-access-2bdr6\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.738989 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:22 crc kubenswrapper[4520]: E0130 06:45:22.739082 4520 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:22 crc kubenswrapper[4520]: E0130 06:45:22.739127 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs podName:6e1a8ebe-5163-47dd-a320-a286c92971c2 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:23.239114384 +0000 UTC m=+36.867466565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs") pod "network-metrics-daemon-z5rcx" (UID: "6e1a8ebe-5163-47dd-a320-a286c92971c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.752191 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdr6\" (UniqueName: \"kubernetes.io/projected/6e1a8ebe-5163-47dd-a320-a286c92971c2-kube-api-access-2bdr6\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.800609 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.800643 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.800654 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.800668 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.800679 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.904013 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.904047 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.904069 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.904084 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:22 crc kubenswrapper[4520]: I0130 06:45:22.904096 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:22Z","lastTransitionTime":"2026-01-30T06:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.006338 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.006423 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.006436 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.006456 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.006468 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.108896 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.108940 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.108949 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.108963 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.108972 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.211012 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.211041 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.211048 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.211060 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.211069 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.244750 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:23 crc kubenswrapper[4520]: E0130 06:45:23.244867 4520 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:23 crc kubenswrapper[4520]: E0130 06:45:23.244916 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs podName:6e1a8ebe-5163-47dd-a320-a286c92971c2 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:24.244902757 +0000 UTC m=+37.873254928 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs") pod "network-metrics-daemon-z5rcx" (UID: "6e1a8ebe-5163-47dd-a320-a286c92971c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.312652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.312678 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.312687 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.312699 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.312716 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.418334 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.418367 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.418376 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.418389 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.418399 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.520960 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.520983 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.520991 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.521006 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.521015 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.622869 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.622893 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.622902 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.622915 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.622923 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.667781 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:30:09.453709757 +0000 UTC Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.727511 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.727564 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.727573 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.727587 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.727596 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.830015 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.830050 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.830058 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.830069 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.830078 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.932451 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.932495 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.932505 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.932547 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:23 crc kubenswrapper[4520]: I0130 06:45:23.932557 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:23Z","lastTransitionTime":"2026-01-30T06:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.034642 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.034673 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.034682 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.034694 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.034705 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.137177 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.137209 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.137218 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.137240 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.137250 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.238954 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.238991 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.239000 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.239009 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.239016 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.252807 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:24 crc kubenswrapper[4520]: E0130 06:45:24.252949 4520 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:24 crc kubenswrapper[4520]: E0130 06:45:24.253018 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs podName:6e1a8ebe-5163-47dd-a320-a286c92971c2 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:26.253000785 +0000 UTC m=+39.881352966 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs") pod "network-metrics-daemon-z5rcx" (UID: "6e1a8ebe-5163-47dd-a320-a286c92971c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.341009 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.341031 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.341042 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.341053 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.341064 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.443214 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.443243 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.443254 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.443263 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.443270 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.545354 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.545383 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.545393 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.545405 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.545411 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.647111 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.647138 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.647147 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.647156 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.647165 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.668593 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:25:52.54814711 +0000 UTC Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.684985 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.684992 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:24 crc kubenswrapper[4520]: E0130 06:45:24.685345 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.685057 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:24 crc kubenswrapper[4520]: E0130 06:45:24.685431 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.685018 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:24 crc kubenswrapper[4520]: E0130 06:45:24.685224 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:24 crc kubenswrapper[4520]: E0130 06:45:24.685486 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.749146 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.749268 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.749359 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.749436 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.749494 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.850654 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.850683 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.850692 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.850705 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.850714 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.952452 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.952485 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.952495 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.952507 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:24 crc kubenswrapper[4520]: I0130 06:45:24.952532 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:24Z","lastTransitionTime":"2026-01-30T06:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.054671 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.054706 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.054714 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.054724 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.054733 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.156254 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.156281 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.156290 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.156302 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.156310 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.257892 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.257929 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.257939 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.257947 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.257955 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.359577 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.359617 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.359628 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.359639 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.359648 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.461671 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.461701 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.461708 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.461718 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.461725 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.563528 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.563566 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.563578 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.563593 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.563604 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.665714 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.665764 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.665774 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.665796 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.665809 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.668887 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:19:48.541490856 +0000 UTC Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.768046 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.768077 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.768086 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.768100 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.768111 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.869944 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.869975 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.869984 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.869995 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.870022 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.971777 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.971823 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.971832 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.971843 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:25 crc kubenswrapper[4520]: I0130 06:45:25.971850 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:25Z","lastTransitionTime":"2026-01-30T06:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.073909 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.073943 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.073952 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.073962 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.073970 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.176155 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.176200 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.176210 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.176221 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.176230 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.270284 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:26 crc kubenswrapper[4520]: E0130 06:45:26.270419 4520 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:26 crc kubenswrapper[4520]: E0130 06:45:26.270484 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs podName:6e1a8ebe-5163-47dd-a320-a286c92971c2 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:30.270465275 +0000 UTC m=+43.898817456 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs") pod "network-metrics-daemon-z5rcx" (UID: "6e1a8ebe-5163-47dd-a320-a286c92971c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.277779 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.277806 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.277817 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.277834 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.277845 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.379902 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.379931 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.379941 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.379954 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.379965 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.482222 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.482288 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.482301 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.482324 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.482338 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.584395 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.584422 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.584430 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.584458 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.584466 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.669264 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:28:01.998887872 +0000 UTC Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.685843 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:26 crc kubenswrapper[4520]: E0130 06:45:26.685999 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.686066 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.686078 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.686115 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:26 crc kubenswrapper[4520]: E0130 06:45:26.686146 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:26 crc kubenswrapper[4520]: E0130 06:45:26.686219 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.686301 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.686344 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.686357 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.686369 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.686377 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: E0130 06:45:26.686609 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.699255 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.707857 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.717352 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.725374 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.732698 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.740658 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.751267 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.760958 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.771290 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.788602 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.788689 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.788755 4520 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.788818 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.788918 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.792996 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e
21807cba84609c063ba525cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:18Z\\\",\\\"message\\\":\\\"onfig-daemon-dkqtt\\\\nI0130 06:45:18.541222 5873 services_controller.go:443] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.153\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0130 06:45:18.541224 5873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.803681 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.816740 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.827290 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.841702 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.849583 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.857590 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc 
kubenswrapper[4520]: I0130 06:45:26.867600 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:26Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.891408 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.891446 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.891456 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.891473 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.891496 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.993924 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.993952 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.993961 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.993977 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:26 crc kubenswrapper[4520]: I0130 06:45:26.993989 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:26Z","lastTransitionTime":"2026-01-30T06:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.095908 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.095935 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.095943 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.095952 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.095959 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.197971 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.198032 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.198044 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.198054 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.198061 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.299951 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.300046 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.300111 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.300178 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.300233 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.402435 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.402487 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.402497 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.402533 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.402548 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.504179 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.504211 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.504220 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.504233 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.504255 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.606147 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.606173 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.606180 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.606190 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.606198 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.669872 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:54:46.13373939 +0000 UTC Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.708083 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.708153 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.708171 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.708202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.708219 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.779503 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.779557 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.779567 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.779581 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.779594 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: E0130 06:45:27.791808 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:27Z is after 2025-08-24T17:21:41Z"
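The failed patch above is attempt one of five: the same "Error updating node status, will retry" record recurs below at 06:45:27.803410, .814910, .826706 and .837716, each a few milliseconds apart, which lines up with the kubelet's per-sync retry budget (nodeStatusUpdateRetry is 5 in the kubelet source) before the sync is abandoned until the next update interval. A rough sketch of that loop shape (illustrative only; tryPatchNodeStatus is a hypothetical stand-in for the kubelet's internal call, and the messages echo the log):

```go
// Illustrative sketch of a bounded status-update retry loop with the shape
// that produces the five "will retry" records in this log. Only the retry
// count mirrors the kubelet constant; everything else is a stand-in.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5

// tryPatchNodeStatus (hypothetical) always fails here, the way the expired
// webhook certificate makes every patch fail in this log.
func tryPatchNodeStatus(attempt int) error {
	return errors.New("failed calling webhook: x509: certificate has expired or is not yet valid")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchNodeStatus(i); err != nil {
			fmt.Printf("attempt %d: Error updating node status, will retry: %v\n", i+1, err)
			continue
		}
		return // a successful patch ends the sync
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```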
event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.794653 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.794667 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.794675 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: E0130 06:45:27.803410 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:27Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.805930 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.805959 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.805969 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.805979 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.805986 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: E0130 06:45:27.814910 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:27Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.817714 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.817744 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.817756 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.817768 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.817777 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: E0130 06:45:27.826706 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:27Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.829141 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.829169 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.829180 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.829192 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.829201 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: E0130 06:45:27.837716 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:27Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:27 crc kubenswrapper[4520]: E0130 06:45:27.837848 4520 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.838822 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.838872 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.838883 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.838894 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.838904 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.940782 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.940812 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.940821 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.940831 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:27 crc kubenswrapper[4520]: I0130 06:45:27.940840 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:27Z","lastTransitionTime":"2026-01-30T06:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.042702 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.042729 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.042738 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.042764 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.042774 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.144182 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.144337 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.144423 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.144537 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.145250 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.248200 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.248231 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.248250 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.248264 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.248274 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.350021 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.350046 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.350056 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.350067 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.350078 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.451702 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.451731 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.451742 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.451753 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.451762 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.552921 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.552962 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.552976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.552998 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.553012 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.654975 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.655032 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.655043 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.655058 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.655068 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.670156 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:12:06.074550404 +0000 UTC Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.684671 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:28 crc kubenswrapper[4520]: E0130 06:45:28.684781 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.685030 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:28 crc kubenswrapper[4520]: E0130 06:45:28.685146 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.685315 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.685593 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:28 crc kubenswrapper[4520]: E0130 06:45:28.685676 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:28 crc kubenswrapper[4520]: E0130 06:45:28.685588 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.757116 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.757159 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.757175 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.757190 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.757201 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.859888 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.859914 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.859925 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.859936 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.859945 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.962113 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.962159 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.962169 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.962183 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:28 crc kubenswrapper[4520]: I0130 06:45:28.962194 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:28Z","lastTransitionTime":"2026-01-30T06:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.064255 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.064297 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.064308 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.064325 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.064339 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.166538 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.166571 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.166583 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.166597 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.166606 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.268540 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.268652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.268714 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.268783 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.268837 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.371121 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.371167 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.371178 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.371196 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.371213 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.473224 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.473288 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.473302 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.473316 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.473327 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.575197 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.575229 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.575239 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.575264 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.575274 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.670981 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:44:33.272307125 +0000 UTC Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.676848 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.676880 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.676894 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.676914 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.676928 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.778881 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.778924 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.778936 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.778956 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.778973 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.882039 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.882072 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.882086 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.882100 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.882110 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.984071 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.984136 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.984163 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.984203 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:29 crc kubenswrapper[4520]: I0130 06:45:29.984234 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:29Z","lastTransitionTime":"2026-01-30T06:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.086189 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.086231 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.086244 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.086271 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.086288 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.188712 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.188777 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.188789 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.188815 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.188832 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.291202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.291241 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.291269 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.291284 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.291293 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.305734 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:30 crc kubenswrapper[4520]: E0130 06:45:30.305863 4520 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:30 crc kubenswrapper[4520]: E0130 06:45:30.305947 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs podName:6e1a8ebe-5163-47dd-a320-a286c92971c2 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:38.305924313 +0000 UTC m=+51.934276504 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs") pod "network-metrics-daemon-z5rcx" (UID: "6e1a8ebe-5163-47dd-a320-a286c92971c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.393173 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.393208 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.393221 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.393237 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.393248 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.495215 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.495259 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.495271 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.495287 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.495303 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.597443 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.597479 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.597491 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.597504 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.597543 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.671076 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:53:06.823067806 +0000 UTC Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.685911 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.686007 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:30 crc kubenswrapper[4520]: E0130 06:45:30.686064 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:30 crc kubenswrapper[4520]: E0130 06:45:30.686417 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.686537 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:30 crc kubenswrapper[4520]: E0130 06:45:30.686826 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.686919 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:30 crc kubenswrapper[4520]: E0130 06:45:30.687008 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.699312 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.699346 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.699356 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.699370 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.699381 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.801851 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.801895 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.801907 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.801920 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.801930 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.903786 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.903823 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.903834 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.903851 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:30 crc kubenswrapper[4520]: I0130 06:45:30.903865 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:30Z","lastTransitionTime":"2026-01-30T06:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.005720 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.005756 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.005766 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.005782 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.005793 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.107429 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.107491 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.107505 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.107553 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.107570 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.209497 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.209556 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.209567 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.209581 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.209590 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.311433 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.311482 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.311494 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.311505 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.311534 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.413124 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.413161 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.413173 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.413191 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.413203 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.514730 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.514763 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.514776 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.514794 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.514808 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.616078 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.616266 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.616357 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.616440 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.616508 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.671923 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:36:04.194068981 +0000 UTC Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.685959 4520 scope.go:117] "RemoveContainer" containerID="0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.718395 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.718425 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.718435 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.718448 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.718459 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.820012 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.820085 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.820094 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.820127 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.820136 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.915897 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/1.log" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.918333 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.918503 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.922354 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.922400 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.922410 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.922427 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.922440 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:31Z","lastTransitionTime":"2026-01-30T06:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.932413 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:31Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.942761 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:31Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.957680 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:31Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.971247 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:31Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:31 crc kubenswrapper[4520]: I0130 06:45:31.980687 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:31Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.002985 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.016182 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.024916 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.024951 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.024961 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.024980 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.024991 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.027897 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.037121 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.051272 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be
1c39c100a82af30d3eba10b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:18Z\\\",\\\"message\\\":\\\"onfig-daemon-dkqtt\\\\nI0130 06:45:18.541222 5873 services_controller.go:443] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.153\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0130 06:45:18.541224 5873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 
06:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.061336 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 
06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.075243 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.085042 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.093555 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.102624 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168
.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-
30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdb
c84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.111551 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.119968 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.127345 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.127384 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.127395 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.127411 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.127421 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.130996 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.230275 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.230312 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.230323 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.230345 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.230357 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.332265 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.332310 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.332322 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.332343 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.332355 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.434544 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.434577 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.434587 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.434621 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.434632 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.536318 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.536353 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.536364 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.536380 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.536390 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.637850 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.637876 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.637886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.637898 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.637908 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.672401 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:44:16.610886241 +0000 UTC Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.684762 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.684804 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.684803 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.684865 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:32 crc kubenswrapper[4520]: E0130 06:45:32.684865 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:32 crc kubenswrapper[4520]: E0130 06:45:32.684943 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:32 crc kubenswrapper[4520]: E0130 06:45:32.684979 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:32 crc kubenswrapper[4520]: E0130 06:45:32.685037 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.739807 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.739859 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.739869 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.739881 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.739892 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.841933 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.841980 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.841989 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.842000 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.842008 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.923557 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/2.log" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.924182 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/1.log" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.926759 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9" exitCode=1 Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.926788 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.926831 4520 scope.go:117] "RemoveContainer" containerID="0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.927752 4520 scope.go:117] "RemoveContainer" containerID="83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9" Jan 30 06:45:32 crc kubenswrapper[4520]: E0130 06:45:32.928689 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.940649 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.943358 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.943380 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.943389 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.943400 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.943408 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:32Z","lastTransitionTime":"2026-01-30T06:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.949975 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.959096 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.968369 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.977767 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.986785 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:32 crc kubenswrapper[4520]: I0130 06:45:32.995026 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:32Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.004794 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.013075 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.020826 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.029703 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.037197 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.045668 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.045694 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.045703 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.045719 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.045731 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.053838 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.062100 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.072394 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.084881 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be
1c39c100a82af30d3eba10b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b7ae62b9399f287aa8884a9a8a3251f58032f7e21807cba84609c063ba525cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:18Z\\\",\\\"message\\\":\\\"onfig-daemon-dkqtt\\\\nI0130 06:45:18.541222 5873 services_controller.go:443] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.153\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:5443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0130 06:45:18.541224 5873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry 
object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.092609 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 
06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.148273 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.148303 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.148313 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.148326 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.148336 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.250360 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.250383 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.250393 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.250404 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.250412 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.352486 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.352629 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.352817 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.352995 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.353164 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.454909 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.454937 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.454946 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.454955 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.454962 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.556858 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.556877 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.556884 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.556895 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.556902 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.658423 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.658558 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.658646 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.658743 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.658821 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.672815 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:57:33.46701985 +0000 UTC Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.761376 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.761848 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.761866 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.761891 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.761906 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.863792 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.863831 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.863845 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.863860 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.863872 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.931303 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/2.log" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.934730 4520 scope.go:117] "RemoveContainer" containerID="83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9" Jan 30 06:45:33 crc kubenswrapper[4520]: E0130 06:45:33.934885 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.961640 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.965299 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.965331 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.965344 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.965358 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.965367 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:33Z","lastTransitionTime":"2026-01-30T06:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.978936 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:33 crc kubenswrapper[4520]: I0130 06:45:33.992442 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:33Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.005832 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.014838 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.023693 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.031776 4520 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.039465 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.047667 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.055246 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.065633 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.067303 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.067386 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.067398 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.067415 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.067429 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.074687 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.083898 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.097260 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be
1c39c100a82af30d3eba10b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.104642 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.117133 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.132169 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.169289 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.169324 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.169333 4520 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.169350 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.169363 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.271285 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.271327 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.271338 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.271355 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.271368 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.373879 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.373921 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.373938 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.373957 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.373975 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.476213 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.476249 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.476258 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.476285 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.476298 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.578632 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.578673 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.578688 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.578706 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.578717 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.639690 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.648154 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.655565 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.664424 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.670753 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc 
kubenswrapper[4520]: I0130 06:45:34.673303 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:17:52.781516483 +0000 UTC Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.679277 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.680434 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.680468 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.680478 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.680493 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.680502 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.685160 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.685201 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.685204 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx"
Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.685348 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 06:45:34 crc kubenswrapper[4520]: E0130 06:45:34.685405 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 06:45:34 crc kubenswrapper[4520]: E0130 06:45:34.685478 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 06:45:34 crc kubenswrapper[4520]: E0130 06:45:34.685607 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 06:45:34 crc kubenswrapper[4520]: E0130 06:45:34.685671 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.685791 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.693485 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.701364 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.709955 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.719090 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.727083 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.734422 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.741739 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.754311 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.762825 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.772647 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.782684 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.782715 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:34 crc 
kubenswrapper[4520]: I0130 06:45:34.782742 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.782759 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.782771 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.785733 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be
1c39c100a82af30d3eba10b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.793796 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:34Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.885654 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.885691 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.885705 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.885719 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.885730 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.987897 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.987931 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.987946 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.987963 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:34 crc kubenswrapper[4520]: I0130 06:45:34.987978 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:34Z","lastTransitionTime":"2026-01-30T06:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.090208 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.090243 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.090253 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.090278 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.090291 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.192196 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.192233 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.192243 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.192257 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.192280 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.293720 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.293744 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.293755 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.293766 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.293774 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.395557 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.395580 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.395589 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.395599 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.395609 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.497667 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.497718 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.497732 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.497751 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.497764 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.601186 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.601211 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.601223 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.601237 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.601247 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.674448 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:09:47.917090853 +0000 UTC Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.703096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.703124 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.703132 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.703142 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.703150 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.805265 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.805304 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.805314 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.805324 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.805332 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.906928 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.906982 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.906992 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.907015 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:35 crc kubenswrapper[4520]: I0130 06:45:35.907027 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:35Z","lastTransitionTime":"2026-01-30T06:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.009346 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.009377 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.009387 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.009400 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.009411 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.111652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.111683 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.111692 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.111706 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.111713 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.213927 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.213968 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.213978 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.213991 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.214000 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.316419 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.316459 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.316470 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.316485 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
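
Every heartbeat in this stretch repeats one root cause: the kubelet's network plugin sees no CNI configuration, so the node is pinned NotReady. The Go sketch below is not kubelet's actual code, only a minimal illustration of that readiness check under the assumption that it amounts to scanning the directory named in the log for *.conf, *.conflist, or *.json files:

// cnicheck.go - minimal sketch of a CNI readiness probe (illustrative only).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var configs []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad glob pattern:", err)
			os.Exit(1)
		}
		configs = append(configs, matches...)
	}
	if len(configs) == 0 {
		// The state this node is stuck in: the network provider
		// (OVN-Kubernetes here) has not yet written its config file.
		fmt.Printf("no CNI configuration file in %s; node stays NotReady\n", confDir)
		return
	}
	fmt.Println("CNI configs present:", configs)
}
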
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.316498 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.371821 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.372050 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:46:08.372024504 +0000 UTC m=+82.000376685 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
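
The TearDown failure above is a registration race, not a data problem: kubelet looks up the CSI driver kubevirt.io.hostpath-provisioner in its list of registered drivers, finds nothing (the provisioner has not re-registered over the kubelet plugin socket since the restart), and parks the unmount for a 32s backoff. A hedged client-go sketch for inspecting what a node currently has registered, via its CSINode object, which reflects kubelet's driver registrations; the kubeconfig path is an assumption, and the node name "crc" is taken from the log:

// csidrivers.go - hedged sketch: print the CSI drivers registered on a node.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object mirrors the drivers kubelet has accepted
	// through plugin registration on this node.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if len(csiNode.Spec.Drivers) == 0 {
		fmt.Println("no CSI drivers registered; unmounts like the one above back off and retry")
		return
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}
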
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.418330 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.418361 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.418371 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.418385 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.418395 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.472718 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.472752 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.472773 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.472796 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.472906 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.472928 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.472938 4520 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.472934 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.472979 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.472986 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 06:46:08.472968815 +0000 UTC m=+82.101320996 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.472906 4520 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.472949 4520 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.473000 4520 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.473056 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:46:08.473031463 +0000 UTC m=+82.101383645 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.473075 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:46:08.473067421 +0000 UTC m=+82.101419602 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.473090 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 06:46:08.473083922 +0000 UTC m=+82.101436103 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.520704 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.520746 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.520760 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.520774 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.520784 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.622188 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.622228 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.622237 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.622249 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.622258 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.674930 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:57:51.888314517 +0000 UTC
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.685447 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
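
Two certificate clocks are visibly wrong here: the kubelet-serving certificate just logged a rotation deadline (2026-01-05) that is already in the past, and the status-manager entries that follow all fail because the network-node-identity webhook serves a certificate with NotAfter 2025-08-24T17:21:41Z, well behind the node clock of 2026-01-30. The validity-window test behind "x509: certificate has expired or is not yet valid" can be reproduced with a few lines of crypto/x509; the PEM file argument below is illustrative, not a path taken from the log:

// certcheck.go - sketch of the x509 validity-window check; pass a PEM cert path.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. the webhook's serving certificate
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("input is not a PEM-encoded certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now().UTC()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// Same verdict the TLS handshake reports in the entries below.
		fmt.Println("x509: certificate has expired or is not yet valid")
	} else {
		fmt.Println("certificate is within its validity window")
	}
}
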
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.685533 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.685596 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.685618 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.685705 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.685762 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.685983 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx"
Jan 30 06:45:36 crc kubenswrapper[4520]: E0130 06:45:36.688494 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.696397 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.704759 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.713107 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.719874 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.724425 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.724576 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.724671 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.724761 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.724835 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.728040 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.737814 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.748425 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.758122 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.765497 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.772575 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 
06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.785628 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.795933 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.805545 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.817085 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be
1c39c100a82af30d3eba10b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.824009 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.826091 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.826116 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.826125 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.826139 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.826154 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.831671 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.840263 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.848918 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:36Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.927452 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.927483 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.927492 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.927547 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:36 crc kubenswrapper[4520]: I0130 06:45:36.927558 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:36Z","lastTransitionTime":"2026-01-30T06:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.029549 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.029589 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.029600 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.029617 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.029630 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.131704 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.131753 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.131764 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.131783 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.131791 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.233598 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.233676 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.233688 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.233703 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.233716 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.335371 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.335422 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.335433 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.335444 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.335456 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.437406 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.437438 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.437447 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.437478 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.437488 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.539576 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.539619 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.539630 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.539646 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.539658 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.641598 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.641636 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.641645 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.641661 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.641675 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.675571 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:37:38.579162736 +0000 UTC Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.743933 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.743964 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.743975 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.743992 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.744003 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.845920 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.845944 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.845953 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.845963 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.845972 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.947405 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.947453 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.947464 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.947476 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:37 crc kubenswrapper[4520]: I0130 06:45:37.947485 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:37Z","lastTransitionTime":"2026-01-30T06:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.049089 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.049195 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.049250 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.049319 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.049385 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.114724 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.114763 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.114791 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.114803 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.114811 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.125168 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:38Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.127652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.127685 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.127694 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.127705 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.127715 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.136454 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:38Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.138715 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.138741 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.138764 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.138774 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.138783 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.147430 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:38Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.149724 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.149766 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.149775 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.149784 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.149792 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.157914 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:38Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.160737 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.160816 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.160870 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.160927 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.160974 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.169293 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:38Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.169530 4520 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.170545 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.170646 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.170747 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.170819 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.170870 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.272922 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.272962 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.272973 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.272990 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.273001 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.375245 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.375291 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.375302 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.375314 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.375323 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.390796 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.390966 4520 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.391024 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs podName:6e1a8ebe-5163-47dd-a320-a286c92971c2 nodeName:}" failed. No retries permitted until 2026-01-30 06:45:54.391004723 +0000 UTC m=+68.019356904 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs") pod "network-metrics-daemon-z5rcx" (UID: "6e1a8ebe-5163-47dd-a320-a286c92971c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.476641 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.476668 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.476677 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.476688 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.476696 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
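Annotation: the MountVolume failure for "metrics-certs" above shows the volume manager's per-operation backoff: the attempt at 06:45:38.391 fails because the secret "openshift-multus"/"metrics-daemon-secret" is not yet registered, and no retry is permitted until 06:45:54.391, i.e. durationBeforeRetry 16s later. The sketch below illustrates a doubling backoff consistent with that figure (a 500ms base reaches 16s on the sixth consecutive failure); the initial delay and cap are assumptions for illustration, not the kubelet's actual constants.

    // backoff.go: sketch of a doubling per-operation retry delay, matching
    // "no retries permitted until 06:45:54.391 ... (durationBeforeRetry 16s)"
    // in the log above. Base delay and cap are assumed values.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // assumed starting delay
        const maxDelay = 2 * time.Minute
        failedAt := time.Date(2026, 1, 30, 6, 45, 38, 391004723, time.UTC) // from the log
        for attempt := 1; attempt <= 7; attempt++ {
            nextAllowed := failedAt.Add(delay)
            fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
                attempt, nextAllowed.Format(time.RFC3339Nano), delay)
            failedAt = nextAllowed // assume the retry fails immediately
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }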
Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.578062 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.578091 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.578101 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.578113 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.578123 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.675997 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:46:52.305235967 +0000 UTC Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.680836 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.680875 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.680885 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.680897 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.680908 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.685192 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.685201 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.685220 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.685298 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.685306 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.685356 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.685377 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:38 crc kubenswrapper[4520]: E0130 06:45:38.685432 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.783195 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.783227 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.783237 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.783248 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.783257 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
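Annotation: every "NodeNotReady" condition and every "Error syncing pod, skipping" entry in this stretch traces to the single message the runtime keeps returning: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. Until the network provider writes a config there, no pod sandbox can be created, which is why the same four pods (networking-console-plugin, network-metrics-daemon-z5rcx, network-check-source, network-check-target) cycle through "No sandbox for pod can be found". A sketch of the readiness check implied by that message, assuming only the directory the log names:

    // cniready.go: sketch of the check behind "no CNI configuration file
    // in /etc/kubernetes/cni/net.d/": the runtime reports NetworkReady=false
    // until a network config appears there. Illustrative only; this is not
    // the CRI-O/ocicni source.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const confDir = "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("NetworkReady=false:", err)
            return
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("NetworkReady=true, using", filepath.Join(confDir, e.Name()))
                return
            }
        }
        fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
    }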
Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.888344 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.888379 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.888388 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.888399 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.888408 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.989935 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.989966 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.989975 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.989987 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:38 crc kubenswrapper[4520]: I0130 06:45:38.989995 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:38Z","lastTransitionTime":"2026-01-30T06:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.091840 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.091874 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.091885 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.091894 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.091902 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.193936 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.193982 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.193999 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.194019 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.194034 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.295465 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.295504 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.295536 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.295551 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.295561 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.397672 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.397705 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.397715 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.397728 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.397738 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.500464 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.500529 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.500545 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.500562 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.500574 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.602982 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.603014 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.603023 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.603040 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.603053 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.677024 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:16:15.421538668 +0000 UTC Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.705004 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.705056 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.705069 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.705091 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.705104 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.806800 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.806830 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.806839 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.806852 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.806860 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.908769 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.908823 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.908833 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.908857 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:39 crc kubenswrapper[4520]: I0130 06:45:39.908867 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:39Z","lastTransitionTime":"2026-01-30T06:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.010221 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.010270 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.010280 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.010302 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.010313 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.112392 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.112421 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.112431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.112443 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.112452 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.214142 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.214184 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.214193 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.214205 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.214214 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.315919 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.315949 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.315958 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.315968 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.315975 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.418126 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.418161 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.418172 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.418183 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.418192 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.519806 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.519844 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.519857 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.519875 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.519889 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.621125 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.621153 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.621163 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.621174 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.621183 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.678179 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:12:01.562911284 +0000 UTC Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.685459 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.685496 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.685557 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:40 crc kubenswrapper[4520]: E0130 06:45:40.685665 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.685799 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:40 crc kubenswrapper[4520]: E0130 06:45:40.685862 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:40 crc kubenswrapper[4520]: E0130 06:45:40.686041 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:40 crc kubenswrapper[4520]: E0130 06:45:40.686142 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.722737 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.722761 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.722768 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.722794 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.722801 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
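Annotation: the three certificate_manager.go:356 entries in this stretch (06:45:38.675, 06:45:39.677, 06:45:40.678) log the same kubelet-serving expiration, 2026-02-24 05:53:03 UTC, but a different rotation deadline each time (2025-12-30, 2025-12-08, 2026-01-16). That is expected: the manager re-draws a jittered deadline inside the certificate's validity window on every pass, and since each drawn deadline is already in the past relative to the node clock, rotation is re-attempted each second. The sketch below mirrors client-go's 70-90% jitter window; treat the exact fraction and the assumed issue date as illustrative.

    // rotation.go: sketch of the jittered rotation deadline visible in the
    // certificate_manager entries above. The 70-90% window mirrors
    // client-go's certificate manager; fraction and lifetime are assumptions.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a point uniformly in [70%, 90%] of the
    // certificate's validity window.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year lifetime
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
        }
    }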
Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.824854 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.824941 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.824998 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.825055 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.825114 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.926429 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.926463 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.926474 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.926486 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:40 crc kubenswrapper[4520]: I0130 06:45:40.926496 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:40Z","lastTransitionTime":"2026-01-30T06:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.028624 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.028660 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.028670 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.028679 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.028687 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.130000 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.130023 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.130032 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.130041 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.130059 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.231561 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.231606 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.231615 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.231624 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.231633 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.333565 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.333592 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.333603 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.333615 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.333623 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.435382 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.435467 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.435552 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.435608 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.435652 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.537494 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.537570 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.537584 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.537603 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.537616 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.640059 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.640106 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.640118 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.640140 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.640153 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.679197 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:54:56.434926701 +0000 UTC Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.742210 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.742361 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.742431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.742493 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.742572 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.844726 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.844758 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.844771 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.844791 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.844801 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.946279 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.946311 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.946320 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.946332 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:41 crc kubenswrapper[4520]: I0130 06:45:41.946342 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:41Z","lastTransitionTime":"2026-01-30T06:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.048428 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.048460 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.048471 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.048484 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.048494 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.149938 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.149961 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.149969 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.149981 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.149992 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.251332 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.251362 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.251371 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.251381 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.251389 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.353321 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.353359 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.353368 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.353381 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.353392 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.454938 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.454964 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.454973 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.454986 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.454994 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.556558 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.556602 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.556613 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.556626 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.556635 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.659578 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.659633 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.659644 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.659659 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.659670 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.680040 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:00:22.722232749 +0000 UTC Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.685442 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.685453 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.685670 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.685453 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:42 crc kubenswrapper[4520]: E0130 06:45:42.685712 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:42 crc kubenswrapper[4520]: E0130 06:45:42.685613 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:42 crc kubenswrapper[4520]: E0130 06:45:42.685780 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:42 crc kubenswrapper[4520]: E0130 06:45:42.685869 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.761445 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.761469 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.761477 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.761487 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.761730 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.863618 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.863647 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.863655 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.863666 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.863674 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.966822 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.966861 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.966875 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.966893 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:42 crc kubenswrapper[4520]: I0130 06:45:42.966903 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:42Z","lastTransitionTime":"2026-01-30T06:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.068458 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.068485 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.068494 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.068506 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.068539 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.169600 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.169631 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.169639 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.169652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.169660 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.271792 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.271856 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.271867 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.271886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.271898 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.373375 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.373416 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.373427 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.373438 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.373446 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.474681 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.474708 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.474716 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.474728 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.474736 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.576138 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.576168 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.576179 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.576191 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.576199 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.677549 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.677578 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.677588 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.677599 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.677607 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.681020 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:11:37.613382385 +0000 UTC Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.779363 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.779390 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.779398 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.779409 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.779417 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.880870 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.880901 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.880911 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.880922 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.880945 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.987157 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.987191 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.987201 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.987214 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:43 crc kubenswrapper[4520]: I0130 06:45:43.987223 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:43Z","lastTransitionTime":"2026-01-30T06:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.089251 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.089273 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.089281 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.089291 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.089308 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.191191 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.191222 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.191232 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.191245 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.191255 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.293096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.293153 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.293165 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.293175 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.293184 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.395073 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.395177 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.395191 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.395204 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.395213 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.496765 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.496827 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.496838 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.496849 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.496857 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.598139 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.598168 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.598177 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.598202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.598210 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.681308 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:33:34.823636188 +0000 UTC Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.685588 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.685616 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.685631 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:44 crc kubenswrapper[4520]: E0130 06:45:44.685696 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.685724 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:44 crc kubenswrapper[4520]: E0130 06:45:44.685802 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:44 crc kubenswrapper[4520]: E0130 06:45:44.685903 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:44 crc kubenswrapper[4520]: E0130 06:45:44.685978 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.699991 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.700018 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.700027 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.700039 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.700050 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.801401 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.801423 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.801431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.801439 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.801449 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.902346 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.902373 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.902380 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.902389 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:44 crc kubenswrapper[4520]: I0130 06:45:44.902397 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:44Z","lastTransitionTime":"2026-01-30T06:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.003620 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.003644 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.003652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.003663 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.003671 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.105404 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.105431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.105439 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.105449 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.105457 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.207317 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.207358 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.207368 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.207381 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.207395 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.309117 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.309157 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.309167 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.309182 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.309193 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.410905 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.410941 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.410951 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.410964 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.410975 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.512590 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.512625 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.512636 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.512649 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.512659 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.614444 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.614468 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.614476 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.614489 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.614497 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
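The setters.go:603 entries above embed the node's Ready condition as inline JSON, using the same field names as a core/v1 NodeCondition. A minimal sketch of decoding that payload, assuming only the fields visible in these lines (the readyCondition struct name is ours, not kubelet's):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// readyCondition mirrors the condition={"type":"Ready",...} payload logged
// by setters.go above; the JSON keys match core/v1 NodeCondition.
type readyCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Condition payload copied from one of the entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c readyCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%s since %s\nreason: %s\n",
		c.Status, c.LastTransitionTime, c.Reason)
}
```

Status "False" with reason KubeletNotReady is what keeps the node NotReady until a CNI config appears under /etc/kubernetes/cni/net.d/.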
Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.682292 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:25:46.162829229 +0000 UTC Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.686081 4520 scope.go:117] "RemoveContainer" containerID="83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9" Jan 30 06:45:45 crc kubenswrapper[4520]: E0130 06:45:45.686299 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.715968 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.716001 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.716010 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.716023 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.716031 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.817271 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.817300 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.817315 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.817326 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.817335 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
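The pod_workers.go entry above holds ovnkube-controller in CrashLoopBackOff with a 20s delay. Kubelet doubles the restart delay on each crash up to a cap; the sketch below illustrates that scheme, with the 10s initial delay and 5m cap taken as assumed upstream defaults:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the back-off before restart n (0-based), using the
// doubling-with-cap scheme kubelet applies to crash-looping containers.
// The initial/max values are assumptions standing in for the defaults.
func crashLoopDelay(restarts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("restart %d: back-off %s\n",
			n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// restart 1 prints "back-off 20s", matching the pod_workers message above.
}
```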
Has your network provider started?"} Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.919214 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.919242 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.919251 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.919262 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:45 crc kubenswrapper[4520]: I0130 06:45:45.919270 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:45Z","lastTransitionTime":"2026-01-30T06:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.020857 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.020886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.020894 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.020905 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.020913 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.122794 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.122839 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.122849 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.122861 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.122868 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.224726 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.224753 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.224763 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.224774 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.224785 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.326907 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.326944 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.326953 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.326969 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.326984 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.428488 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.428561 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.428573 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.428597 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.428607 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
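Every kubenswrapper line in this log carries the standard klog header: severity letter, MMDD, wall-clock time with microseconds, PID, and source file:line. A small sketch for splitting that header out, assuming the single-space separators as rendered here (real klog pads the PID column, which the \s+ tolerates):

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the prefix on every kubenswrapper entry above:
// severity (I/W/E/F), MMDD, time, PID, and source file:line.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\]`)

func main() {
	line := `I0130 06:45:46.224726 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n",
		m[1], m[2], m[3], m[4], m[5])
}
```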
Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.529920 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.529953 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.529963 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.529976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.529993 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.631734 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.631763 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.631771 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.631783 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.631793 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.683378 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 18:36:42.547636672 +0000 UTC Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.684607 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.684628 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:46 crc kubenswrapper[4520]: E0130 06:45:46.684701 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
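Both certificate_manager.go entries report a rotation deadline already in the past relative to the node clock (2025-12-24 and then 2025-11-06, against 2026-01-30), so the kubelet-serving certificate is due for rotation immediately and the deadline is re-rolled on each pass. client-go derives the deadline as a jittered fraction of the certificate's validity window; the 70-90% range below is quoted from memory and should be treated as an assumption, as is the one-year validity:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline sketches how the certificate manager picks the deadlines
// logged above: a random point roughly 70-90% of the way through the
// certificate's validity window (jitter range assumed, not verified).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(validity) * jitter))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.AddDate(0, -12, 0)                  // assumed one-year validity
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	// Each evaluation re-rolls the jitter, which is why the two
	// certificate_manager entries above show different deadlines.
}
```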
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.684771 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:46 crc kubenswrapper[4520]: E0130 06:45:46.684837 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:46 crc kubenswrapper[4520]: E0130 06:45:46.684886 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.684925 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:46 crc kubenswrapper[4520]: E0130 06:45:46.684968 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.693839 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.699878 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.707336 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.714166 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.720846 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.729586 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.732976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.733005 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.733016 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.733028 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.733037 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.736773 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.744120 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.752909 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
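The status_manager.go:875 entries quote each failed patch as an escaped Go string, which buries the JSON under backslashes. Unquoting once per escaping layer recovers it; the sketch below peels a single layer from a heavily shortened excerpt of the patch above (a raw journal capture like this one carries an extra layer, so extract the inner quoted span and unquote again):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// One escaping layer of a (heavily shortened) patch from the
	// status_manager entries above, as it appears inside err="...".
	quoted := `"{\"metadata\":{\"uid\":\"9d751cbb-f2e2-430d-9754-c882a5e924a5\"},\"status\":{\"podIP\":null,\"podIPs\":null}}"`

	patch, err := strconv.Unquote(quoted)
	if err != nil {
		panic(err)
	}
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(patch), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(out.String())
}
```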
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.760273 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.773379 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.781764 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.792329 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.805459 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be
1c39c100a82af30d3eba10b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.813129 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.826946 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.834871 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.834901 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.834911 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.834925 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.834934 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.836207 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.845658 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:46Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.936984 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.937202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.937212 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.937226 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:46 crc kubenswrapper[4520]: I0130 06:45:46.937234 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:46Z","lastTransitionTime":"2026-01-30T06:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.038693 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.038716 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.038725 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.038737 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.038745 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.140009 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.140041 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.140052 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.140063 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.140073 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.241901 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.241964 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.241976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.241991 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.242002 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.343606 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.343638 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.343645 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.343657 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.343665 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.445398 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.445447 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.445456 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.445470 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.445480 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.547846 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.547886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.547897 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.547908 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.547916 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.650284 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.650363 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.650375 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.650390 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.650400 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.683725 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:44:36.065168828 +0000 UTC Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.752508 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.752563 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.752573 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.752584 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.752594 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.853783 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.853820 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.853830 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.853844 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.853852 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.955658 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.955685 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.955695 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.955705 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:47 crc kubenswrapper[4520]: I0130 06:45:47.955713 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:47Z","lastTransitionTime":"2026-01-30T06:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.057056 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.057185 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.057265 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.057349 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.057431 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.159345 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.159378 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.159389 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.159402 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.159413 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.261738 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.261770 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.261779 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.261791 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.261801 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.340823 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.340846 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.340854 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.340868 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.340876 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.350444 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:48Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.353242 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.353272 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.353281 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.353294 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.353302 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.363026 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:48Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.365548 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.365633 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.365703 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.365773 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.365844 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.373901 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:48Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.376906 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.377019 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.377105 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.377182 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.377260 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.385145 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:48Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.387375 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.387470 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.387568 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.387649 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.387727 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.395460 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:48Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.395763 4520 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.396948 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.397051 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.397130 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.397210 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.397261 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.498356 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.498784 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.498855 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.498932 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.498981 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.600288 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.600429 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.600614 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.600697 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.600774 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.684025 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:28:15.200461124 +0000 UTC Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.685439 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.685603 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.685488 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.685760 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.685454 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.686204 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.686325 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:48 crc kubenswrapper[4520]: E0130 06:45:48.686444 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.702422 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.702450 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.702460 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.702471 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.702478 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.804468 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.804618 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.804701 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.804771 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.804819 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.906600 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.906774 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.906872 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.906962 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:48 crc kubenswrapper[4520]: I0130 06:45:48.907045 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:48Z","lastTransitionTime":"2026-01-30T06:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.008302 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.008339 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.008348 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.008364 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.008372 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.110253 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.110297 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.110307 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.110327 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.110335 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.212189 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.212219 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.212228 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.212240 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.212249 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.313707 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.313731 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.313741 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.313751 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.313759 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.415923 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.415954 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.415965 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.415976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.415984 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.517395 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.517423 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.517432 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.517442 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.517449 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.619096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.619128 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.619140 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.619153 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.619177 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.684756 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:35:31.528398941 +0000 UTC Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.721127 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.721175 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.721185 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.721198 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.721206 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.823155 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.823193 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.823202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.823219 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.823228 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.924468 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.924584 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.924661 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.924732 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:49 crc kubenswrapper[4520]: I0130 06:45:49.924793 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:49Z","lastTransitionTime":"2026-01-30T06:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.026249 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.026276 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.026287 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.026300 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.026310 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.129072 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.129112 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.129120 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.129131 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.129140 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.231617 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.231646 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.231664 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.231678 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.231688 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.334067 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.334096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.334106 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.334119 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.334131 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.436146 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.436183 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.436192 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.436209 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.436220 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.538305 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.538373 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.538385 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.538408 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.538421 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.640765 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.640837 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.640848 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.640860 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.640870 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.685564 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:49:44.290002973 +0000 UTC Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.685712 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.685744 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:50 crc kubenswrapper[4520]: E0130 06:45:50.685866 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.685907 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:50 crc kubenswrapper[4520]: E0130 06:45:50.685962 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:50 crc kubenswrapper[4520]: E0130 06:45:50.686031 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.686115 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:50 crc kubenswrapper[4520]: E0130 06:45:50.686180 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.742332 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.742369 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.742381 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.742399 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.742411 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.844060 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.844109 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.844122 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.844136 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.844146 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.946117 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.946144 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.946155 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.946168 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:50 crc kubenswrapper[4520]: I0130 06:45:50.946179 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:50Z","lastTransitionTime":"2026-01-30T06:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.047602 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.047633 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.047642 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.047654 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.047683 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.150188 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.150218 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.150229 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.150241 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.150253 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.252190 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.252213 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.252222 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.252235 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.252245 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.354652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.354684 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.354695 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.354706 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.354715 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.456350 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.456388 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.456400 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.456412 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.456427 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.558245 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.558283 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.558293 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.558305 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.558314 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.659943 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.659995 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.660007 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.660022 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.660035 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.686536 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:15:26.729206243 +0000 UTC Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.762208 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.762230 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.762240 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.762256 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.762265 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.863762 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.863822 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.863835 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.863848 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.863858 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.966055 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.966090 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.966101 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.966114 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:51 crc kubenswrapper[4520]: I0130 06:45:51.966124 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:51Z","lastTransitionTime":"2026-01-30T06:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.067754 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.067802 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.067813 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.067825 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.067835 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:52Z","lastTransitionTime":"2026-01-30T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.170015 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.170039 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.170047 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.170057 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.170066 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:52Z","lastTransitionTime":"2026-01-30T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.271796 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.271842 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.271854 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.271872 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.271884 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:52Z","lastTransitionTime":"2026-01-30T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.373662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.373696 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.373706 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.373723 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.373736 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:52Z","lastTransitionTime":"2026-01-30T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.475566 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.475594 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.475602 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.475612 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.475620 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:52Z","lastTransitionTime":"2026-01-30T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.577421 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.577442 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.577450 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.577460 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.577467 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:52Z","lastTransitionTime":"2026-01-30T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.678851 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.678890 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.678901 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.678916 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.678926 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:52Z","lastTransitionTime":"2026-01-30T06:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.685119 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.685148 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.685165 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.685213 4520 util.go:30] "No sandbox for pod can be found. 
Jan 30 06:45:52 crc kubenswrapper[4520]: I0130 06:45:52.687063 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 19:35:31.95363915 +0000 UTC
Jan 30 06:45:52 crc kubenswrapper[4520]: E0130 06:45:52.687342 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 06:45:52 crc kubenswrapper[4520]: E0130 06:45:52.688306 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2"
Jan 30 06:45:52 crc kubenswrapper[4520]: E0130 06:45:52.688579 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 06:45:52 crc kubenswrapper[4520]: E0130 06:45:52.688763 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status block repeats every ~100 ms from 06:45:52.781 through 06:45:53.601]
Has your network provider started?"} Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.687554 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:05:28.026942951 +0000 UTC Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.702924 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.702966 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.702977 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.702990 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.703001 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:53Z","lastTransitionTime":"2026-01-30T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.805034 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.805075 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.805089 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.805110 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.805125 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:53Z","lastTransitionTime":"2026-01-30T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.907544 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.907808 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.907883 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.907963 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:53 crc kubenswrapper[4520]: I0130 06:45:53.908024 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:53Z","lastTransitionTime":"2026-01-30T06:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.010617 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.010667 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.010677 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.010691 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.010699 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:54Z","lastTransitionTime":"2026-01-30T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.113122 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.113156 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.113165 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.113177 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.113186 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:54Z","lastTransitionTime":"2026-01-30T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.214880 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.214904 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.214931 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.214943 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.214952 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:54Z","lastTransitionTime":"2026-01-30T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.316767 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.316805 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.316816 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.316832 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.316842 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:54Z","lastTransitionTime":"2026-01-30T06:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.406291 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:54 crc kubenswrapper[4520]: E0130 06:45:54.406408 4520 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:45:54 crc kubenswrapper[4520]: E0130 06:45:54.406455 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs podName:6e1a8ebe-5163-47dd-a320-a286c92971c2 nodeName:}" failed. No retries permitted until 2026-01-30 06:46:26.406441442 +0000 UTC m=+100.034793623 (durationBeforeRetry 32s). 
[node-status block repeats every ~100 ms from 06:45:54.418 through 06:45:54.622]
[the four "No sandbox for pod can be found. Need to start a new one" entries and the four "Error syncing pod, skipping" entries recur at 06:45:54.685 for the same four pods listed at 06:45:52.685]
Jan 30 06:45:54 crc kubenswrapper[4520]: I0130 06:45:54.687744 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:22:58.51374741 +0000 UTC
[node-status block repeats every ~100 ms from 06:45:54.724 through 06:45:55.641]
Jan 30 06:45:55 crc kubenswrapper[4520]: I0130 06:45:55.687954 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:09:03.653569303 +0000 UTC
[node-status block repeats every ~100 ms from 06:45:55.743 through 06:45:55.946]
Jan 30 06:45:55 crc kubenswrapper[4520]: I0130 06:45:55.989538 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/0.log"
Jan 30 06:45:55 crc kubenswrapper[4520]: I0130 06:45:55.989653 4520 generic.go:334] "Generic (PLEG): container finished" podID="dfdf507d-4d3e-40ac-a9dc-c39c411f4c26" containerID="fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545" exitCode=1
Jan 30 06:45:55 crc kubenswrapper[4520]: I0130 06:45:55.989738 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mn7g2" event={"ID":"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26","Type":"ContainerDied","Data":"fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545"}
Jan 30 06:45:55 crc kubenswrapper[4520]: I0130 06:45:55.990086 4520 scope.go:117] "RemoveContainer" containerID="fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545"
Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.006383 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status:
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.017532 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.027963 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.037369 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.046177 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.048298 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.048324 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.048332 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.048352 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.048361 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.055155 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.067358 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.080861 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be
1c39c100a82af30d3eba10b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.087958 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.100136 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.108419 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.116857 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.125278 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"2026-01-30T06:45:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7\\\\n2026-01-30T06:45:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7 to /host/opt/cni/bin/\\\\n2026-01-30T06:45:10Z [verbose] multus-daemon started\\\\n2026-01-30T06:45:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T06:45:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.132170 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.140482 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.147818 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 
2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.149882 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.149902 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.149911 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.149923 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.149930 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.155897 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.161444 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.251508 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.251543 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.251551 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.251561 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.251569 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.353508 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.353549 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.353559 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.353570 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.353578 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.454933 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.454960 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.454970 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.454981 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.454989 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.558922 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.558965 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.558976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.558992 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.559007 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.661270 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.661311 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.661321 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.661337 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.661360 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.685498 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.685583 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:56 crc kubenswrapper[4520]: E0130 06:45:56.685624 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.685638 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.685684 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:56 crc kubenswrapper[4520]: E0130 06:45:56.685817 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:56 crc kubenswrapper[4520]: E0130 06:45:56.685969 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:56 crc kubenswrapper[4520]: E0130 06:45:56.686157 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.686735 4520 scope.go:117] "RemoveContainer" containerID="83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.688183 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:17:28.691867991 +0000 UTC Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.694303 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.701610 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be
1c39c100a82af30d3eba10b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.712722 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.726681 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.736410 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.746327 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.755007 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"2026-01-30T06:45:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7\\\\n2026-01-30T06:45:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7 to /host/opt/cni/bin/\\\\n2026-01-30T06:45:10Z [verbose] multus-daemon started\\\\n2026-01-30T06:45:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T06:45:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.762457 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.763412 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.763446 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.763457 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.763472 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.763498 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.770210 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.779330 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.787695 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.794308 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.802944 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.812393 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.820169 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.829858 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.838502 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.846290 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.853960 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:56Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.865276 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.865296 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.865306 4520 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.865321 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.865331 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.966602 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.966632 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.966641 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.966653 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.966662 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:56Z","lastTransitionTime":"2026-01-30T06:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:56 crc kubenswrapper[4520]: I0130 06:45:56.993538 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/2.log" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.002226 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.004868 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/0.log" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.005101 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mn7g2" event={"ID":"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26","Type":"ContainerStarted","Data":"d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.031651 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.045923 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.059625 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.068505 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.068566 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.068577 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.068590 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.068600 4520 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.069294 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.077815 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.085776 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.093539 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.100547 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.107743 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.119178 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.126964 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.136572 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.148986 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd3
21a8862e606a18a83a6f902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: 
[]services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.157949 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 
06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.165894 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.170418 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.170441 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.170470 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.170482 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.170491 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.173794 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2de9fcdc-e1c8-4275-a53b-b0648a2327fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5785142c6cf161b6452de8efa5caafe1bd42705e2454274648f552108de7c84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.182553 4520 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.192441 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"2026-01-30T06:45:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7\\\\n2026-01-30T06:45:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7 to /host/opt/cni/bin/\\\\n2026-01-30T06:45:10Z [verbose] multus-daemon started\\\\n2026-01-30T06:45:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T06:45:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.199071 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.206918 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.214593 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.225825 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.235492 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.244331 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.253776 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.261227 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.268085 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.272122 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.272164 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.272174 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.272191 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.272202 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.276378 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.293102 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.301190 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.313081 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.328639 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd3
21a8862e606a18a83a6f902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: 
[]services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.338905 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 
06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.347478 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.355903 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2de9fcdc-e1c8-4275-a53b-b0648a2327fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5785142c6cf161b6452de8efa5caafe1bd42705e2454274648f552108de7c84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.365047 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.374266 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.374396 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.374469 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.374567 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.374642 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.374745 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"2026-01-30T06:45:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7\\\\n2026-01-30T06:45:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7 to /host/opt/cni/bin/\\\\n2026-01-30T06:45:10Z [verbose] multus-daemon started\\\\n2026-01-30T06:45:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T06:45:55Z [error] have you 
checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.382092 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.477266 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.477300 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.477310 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.477325 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.477336 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.579086 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.579115 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.579125 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.579141 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.579151 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.681407 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.681438 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.681447 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.681458 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.681467 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.689068 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:10:13.430613108 +0000 UTC Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.783257 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.783405 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.783511 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.783614 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.783670 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.884665 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.884767 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.884836 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.884896 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.884946 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.986893 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.986935 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.986947 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.986966 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:57 crc kubenswrapper[4520]: I0130 06:45:57.986978 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:57Z","lastTransitionTime":"2026-01-30T06:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.008680 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/3.log" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.009169 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/2.log" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.012258 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" exitCode=1 Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.012293 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.012324 4520 scope.go:117] "RemoveContainer" containerID="83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.012971 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.013120 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.022880 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.032336 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.040992 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.049304 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.060914 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.070855 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.079950 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.088642 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.088667 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.088678 4520 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.088699 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.088711 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.092675 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd3
21a8862e606a18a83a6f902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83bec6fbb06733bdb4237b84ef9807ba374424be1c39c100a82af30d3eba10b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:32Z\\\",\\\"message\\\":\\\"onAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 06:45:32.356581 6080 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0130 06:45:32.356588 6080 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 06:45:32.355871 6080 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-mn7g2\\\\nI0130 06:45:32.356601 6080 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.l\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"message\\\":\\\"services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 06:45:57.387330 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:45:57.387359 6458 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-hf7k5\\\\nI0130 06:45:57.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.100599 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 
06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.114438 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.125186 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.135728 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.162744 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"2026-01-30T06:45:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7\\\\n2026-01-30T06:45:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7 to /host/opt/cni/bin/\\\\n2026-01-30T06:45:10Z [verbose] multus-daemon started\\\\n2026-01-30T06:45:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T06:45:55Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.177919 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.191489 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.191542 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.191554 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.191575 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.191588 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.199732 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.212562 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2de9fcdc-e1c8-4275-a53b-b0648a2327fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5785142c6cf161b6452de8efa5caafe1bd42705e2454274648f552108de7c84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.224135 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.232053 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.238669 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.293071 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.293099 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.293108 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.293141 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.293152 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.395898 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.395927 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.395936 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.395949 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.395958 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.498033 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.498056 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.498065 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.498094 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.498101 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.586997 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.587038 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.587049 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.587067 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.587078 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.596739 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.601627 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.601653 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.601662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.601674 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.601681 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.612247 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.614992 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.615024 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.615034 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.615047 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.615056 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.623546 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.626349 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.626379 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.626387 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.626402 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.626412 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.639766 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.642017 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.642085 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.642096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.642106 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.642114 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.650971 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:58Z is after 2025-08-24T17:21:41Z" Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.651078 4520 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.652070 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.652096 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.652106 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.652117 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.652125 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.685579 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.685585 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.685727 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.685824 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.686097 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.686171 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.686335 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:45:58 crc kubenswrapper[4520]: E0130 06:45:58.686461 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.689263 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:08:33.220768198 +0000 UTC Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.753618 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.753656 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.753666 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.753676 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.753685 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.855386 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.855420 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.855429 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.855443 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.855453 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.957466 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.957492 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.957504 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.957537 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:58 crc kubenswrapper[4520]: I0130 06:45:58.957547 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:58Z","lastTransitionTime":"2026-01-30T06:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.016541 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/3.log" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.059708 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.059757 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.059804 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.059818 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.059828 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.161865 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.161896 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.161907 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.161922 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.161933 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.264354 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.264409 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.264421 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.264436 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.264445 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.366057 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.366082 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.366095 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.366121 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.366131 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.468406 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.468456 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.468472 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.468495 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.468508 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.570827 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.570880 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.570891 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.570906 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.570917 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.672402 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.672433 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.672444 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.672457 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.672466 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.689962 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:32:39.447347525 +0000 UTC Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.774353 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.774427 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.774446 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.774471 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.774488 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.876751 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.876784 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.876794 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.876805 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.876814 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.978664 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.978694 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.978707 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.978722 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:45:59 crc kubenswrapper[4520]: I0130 06:45:59.978731 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:45:59Z","lastTransitionTime":"2026-01-30T06:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.080442 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.080660 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.080672 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.080690 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.080702 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.182612 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.182633 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.182641 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.182652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.182660 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.284052 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.284090 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.284099 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.284109 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.284117 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.386150 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.386185 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.386197 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.386210 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.386219 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.487593 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.487612 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.487622 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.487633 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.487640 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.589857 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.589918 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.589930 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.589955 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.589967 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.685058 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.685092 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.685064 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:00 crc kubenswrapper[4520]: E0130 06:46:00.685169 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.685279 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:00 crc kubenswrapper[4520]: E0130 06:46:00.685412 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:00 crc kubenswrapper[4520]: E0130 06:46:00.685583 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:00 crc kubenswrapper[4520]: E0130 06:46:00.685727 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.690562 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:36:54.701362515 +0000 UTC Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.691664 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.691700 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.691712 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.691730 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.691742 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.793598 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.793628 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.793647 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.793659 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.793668 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.895867 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.895887 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.895896 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.895906 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:00 crc kubenswrapper[4520]: I0130 06:46:00.895914 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:00Z","lastTransitionTime":"2026-01-30T06:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.000332 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.000365 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.000388 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.000416 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.000429 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.102979 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.103010 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.103019 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.103035 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.103046 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.204834 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.204871 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.204883 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.204901 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.204913 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.306880 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.306911 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.306919 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.306934 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.306944 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.409098 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.409201 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.409277 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.409355 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.409436 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.511820 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.511860 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.511870 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.511886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.511895 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.613478 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.613500 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.613535 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.613547 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.613555 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.691418 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:34:14.094182756 +0000 UTC Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.715904 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.715979 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.716049 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.716103 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.716147 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.817896 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.817921 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.817931 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.817942 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.817950 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.920090 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.920120 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.920130 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.920140 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:01 crc kubenswrapper[4520]: I0130 06:46:01.920148 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:01Z","lastTransitionTime":"2026-01-30T06:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.022138 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.022170 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.022184 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.022197 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.022206 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.094580 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.095198 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:46:02 crc kubenswrapper[4520]: E0130 06:46:02.095342 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.105615 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.113892 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.121293 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.123540 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.123562 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.123571 4520 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.123583 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.123595 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.128966 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.136202 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.142474 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.152946 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.165569 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e3857cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.174313 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.183813 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b4312074
98a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.196833 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.196833 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"message\\\":\\\"services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 06:45:57.387330 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:45:57.387359 6458 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-hf7k5\\\\nI0130 06:45:57.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z"
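The ovnkube-node entry above also records why ovnkube-controller sits in CrashLoopBackOff: each run exits with code 1 after the same expired certificate blocks its call to the node.network-node-identity.openshift.io webhook, and the kubelet then doubles the restart delay. Assuming the kubelet's usual defaults of a 10s initial back-off doubling to a 5m cap (defaults this log does not itself state), restartCount 3 lines up with the quoted "back-off 40s": 10s, 20s, 40s. A small sketch of that doubling:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay doubles a base delay once per additional restart,
// capped at max, mirroring (under the assumed defaults above) how a
// kubelet backs off a repeatedly failing container.
func crashLoopDelay(restarts int, base, max time.Duration) time.Duration {
	d := base
	for i := 1; i < restarts; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for r := 1; r <= 4; r++ {
		// restart 3 prints 40s, matching the waiting-state message above.
		fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
}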
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.211350 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.217637 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2de9fcdc-e1c8-4275-a53b-b0648a2327fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5785142c6cf161b6452de8efa5caafe1bd42705e2454274648f552108de7c84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.225566 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.225593 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.225604 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.225625 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.225637 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
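The "Node became not ready" entry above is the downstream effect of the crash-looping ovnkube-controller: no OVN-Kubernetes CNI config ever lands in /etc/kubernetes/cni/net.d/, so the kubelet reports NetworkReady=false. The kube-multus entry below shows the same wait from the Multus side: its daemon polls for the default network's readiness indicator file and gives up with a poll timeout. A rough sketch of such a poll, assuming k8s.io/apimachinery is available and reusing the indicator path quoted below (this is an illustration, not the Multus source):

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Path taken from the kube-multus log entry below.
	const indicator = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	// Poll every second until the file appears or the timeout elapses;
	// the 45s budget here is an assumption for illustration.
	err := wait.PollImmediate(1*time.Second, 45*time.Second, func() (bool, error) {
		if _, statErr := os.Stat(indicator); statErr != nil {
			if os.IsNotExist(statErr) {
				return false, nil // not there yet; keep polling
			}
			return false, statErr // unexpected filesystem error: stop
		}
		return true, nil // indicator present: default network is ready
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "readiness indicator not found: %v\n", err)
	}
}

On timeout, wait.PollImmediate returns the error "timed out waiting for the condition", the same text quoted as the "pollimmediate error" in the kube-multus entry that follows.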
Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.226309 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.234779 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"2026-01-30T06:45:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7\\\\n2026-01-30T06:45:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7 to /host/opt/cni/bin/\\\\n2026-01-30T06:45:10Z [verbose] multus-daemon started\\\\n2026-01-30T06:45:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T06:45:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.243038 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.252340 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.260471 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:02Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.327381 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.327418 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.327432 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.327452 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.327462 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.429672 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.429714 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.429725 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.429742 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.429753 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.531353 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.531389 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.531399 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.531424 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.531436 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.632801 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.632921 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.632976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.633037 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.633103 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.684604 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.684653 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.684602 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:02 crc kubenswrapper[4520]: E0130 06:46:02.684713 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.684808 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:02 crc kubenswrapper[4520]: E0130 06:46:02.684855 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:02 crc kubenswrapper[4520]: E0130 06:46:02.685036 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:02 crc kubenswrapper[4520]: E0130 06:46:02.685115 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.691579 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:15:34.708312546 +0000 UTC Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.735093 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.735122 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.735133 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.735148 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.735158 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.837020 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.837054 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.837064 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.837078 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.837089 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.938985 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.939109 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.939172 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.939227 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:02 crc kubenswrapper[4520]: I0130 06:46:02.939283 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:02Z","lastTransitionTime":"2026-01-30T06:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.041027 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.041056 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.041065 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.041077 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.041085 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.143057 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.143103 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.143114 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.143127 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.143136 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.244947 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.244999 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.245013 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.245033 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.245044 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.346918 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.346960 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.346973 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.346988 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.347004 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.449037 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.449088 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.449100 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.449112 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.449121 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.551004 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.551030 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.551039 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.551069 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.551079 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.652865 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.652902 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.652919 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.652933 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.652940 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.691704 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:17:22.343606793 +0000 UTC Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.755011 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.755039 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.755050 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.755079 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.755090 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.856540 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.856569 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.856578 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.856591 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.856600 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.958306 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.958332 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.958340 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.958352 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:03 crc kubenswrapper[4520]: I0130 06:46:03.958360 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:03Z","lastTransitionTime":"2026-01-30T06:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.060146 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.060197 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.060211 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.060224 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.060232 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.161481 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.161509 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.161533 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.161546 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.161556 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.263295 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.263348 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.263360 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.263380 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.263394 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.365492 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.365543 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.365555 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.365568 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.365576 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.467130 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.467174 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.467184 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.467197 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.467206 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.568692 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.568722 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.568735 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.568746 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.568754 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.670960 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.671004 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.671015 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.671038 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.671054 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.685443 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.685471 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.685451 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.685548 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:04 crc kubenswrapper[4520]: E0130 06:46:04.685641 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:04 crc kubenswrapper[4520]: E0130 06:46:04.685712 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:04 crc kubenswrapper[4520]: E0130 06:46:04.685806 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:04 crc kubenswrapper[4520]: E0130 06:46:04.685859 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.692256 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:36:50.40292509 +0000 UTC Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.772875 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.772917 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.772928 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.772944 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.772955 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.874840 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.874889 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.874902 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.874915 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.874924 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.976636 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.976665 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.976674 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.976687 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:04 crc kubenswrapper[4520]: I0130 06:46:04.976698 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:04Z","lastTransitionTime":"2026-01-30T06:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.078887 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.078923 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.078934 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.078945 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.078953 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.180374 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.180417 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.180425 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.180445 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.180454 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.282582 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.282617 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.282626 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.282640 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.282651 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.384586 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.384609 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.384618 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.384628 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.384636 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.486455 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.486620 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.486649 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.486662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.486669 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.588975 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.589001 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.589012 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.589024 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.589031 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.690439 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.690472 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.690480 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.690490 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.690499 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.693148 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:04:44.048906404 +0000 UTC
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.792424 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.792469 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.792480 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.792495 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.792504 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.894599 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.894622 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.894630 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.894641 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.894649 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.996559 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.996619 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.996631 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.996653 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:05 crc kubenswrapper[4520]: I0130 06:46:05.996666 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:05Z","lastTransitionTime":"2026-01-30T06:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.098783 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.098816 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.098825 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.098836 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.098844 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.201325 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.201353 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.201362 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.201373 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.201382 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.303200 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.303259 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.303271 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.303510 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.303623 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.406000 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.406049 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.406060 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.406071 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.406078 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.507682 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.507793 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.507857 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.507920 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.507977 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.609812 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.609873 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.609886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.609912 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.609924 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.685488 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.685580 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.685601 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx"
Jan 30 06:46:06 crc kubenswrapper[4520]: E0130 06:46:06.685702 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.685736 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 06:46:06 crc kubenswrapper[4520]: E0130 06:46:06.685827 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2"
Jan 30 06:46:06 crc kubenswrapper[4520]: E0130 06:46:06.685909 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 06:46:06 crc kubenswrapper[4520]: E0130 06:46:06.685957 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.693402 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:24:10.543710367 +0000 UTC
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.695240 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.701397 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.709107 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.711053 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.711080 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.711088 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.711104 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.711114 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.716319 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.722335 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.730097 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.739545 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.750393 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.764467 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.779182 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"message\\\":\\\"services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 06:45:57.387330 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:45:57.387359 6458 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-hf7k5\\\\nI0130 06:45:57.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z"
Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.787395 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.800684 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.811292 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.812979 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.813037 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.813050 4520 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.813067 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.813076 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.821622 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e2
8c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.830685 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"2026-01-30T06:45:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7\\\\n2026-01-30T06:45:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7 to /host/opt/cni/bin/\\\\n2026-01-30T06:45:10Z [verbose] multus-daemon started\\\\n2026-01-30T06:45:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T06:45:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.837785 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.845629 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.852807 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2de9fcdc-e1c8-4275-a53b-b0648a2327fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5785142c6cf161b6452de8efa5caafe1bd42705e2454274648f552108de7c84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.861411 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:06Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.915281 4520 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.915307 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.915317 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.915331 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:06 crc kubenswrapper[4520]: I0130 06:46:06.915341 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:06Z","lastTransitionTime":"2026-01-30T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.017243 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.017290 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.017302 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.017320 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.017332 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.119006 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.119042 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.119051 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.119065 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.119075 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.221363 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.221391 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.221400 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.221414 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.221422 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.323102 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.323160 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.323176 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.323198 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.323213 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.425082 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.425116 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.425125 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.425139 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.425149 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.526927 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.526965 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.526975 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.526993 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.527003 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.628718 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.628743 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.628751 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.628760 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.628768 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.693495 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:08:34.419590543 +0000 UTC Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.730651 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.730680 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.730689 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.730699 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.730707 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.832396 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.832430 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.832442 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.832455 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.832475 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
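Both rotation deadlines logged by certificate_manager.go (2025-11-10 above, 2025-12-01 further down) fall months before the log time, so the kubelet-serving certificate is already well inside its rotation window; client-go presumably re-jitters the deadline on each check, which would explain why the two logged values differ. Quick arithmetic against the logged timestamps:

# Both logged rotation deadlines are already in the past at 2026-01-30 06:46,
# i.e. rotation of the kubelet-serving certificate is overdue on every sync.
from datetime import datetime, timezone

now = datetime(2026, 1, 30, 6, 46, 7, tzinfo=timezone.utc)   # log timestamp
deadlines = [
    datetime(2025, 11, 10, 19, 8, 34, tzinfo=timezone.utc),  # logged at 06:46:07
    datetime(2025, 12, 1, 7, 49, 46, tzinfo=timezone.utc),   # logged at 06:46:08
]
for d in deadlines:
    print(d.date(), "overdue by", now - d)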
Has your network provider started?"} Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.933850 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.933873 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.933882 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.933893 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:07 crc kubenswrapper[4520]: I0130 06:46:07.933901 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:07Z","lastTransitionTime":"2026-01-30T06:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.036089 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.036117 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.036125 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.036135 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.036143 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.137959 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.137999 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.138009 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.138024 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.138033 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.239740 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.239783 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.239794 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.239811 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.239822 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.341555 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.341588 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.341600 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.341612 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.341620 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.423109 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.425221 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.425194834 +0000 UTC m=+146.053547015 (durationBeforeRetry 1m4s). 
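The durationBeforeRetry of 1m4s fits a doubling backoff for failed volume operations: 64s is the delay after the eighth consecutive failure if the delay starts at 500ms and doubles each time, capping a little over two minutes. The initial and cap values in this sketch are assumptions matching my recollection of upstream kubelet defaults, not values read from this log:

# The 1m4s (64s) durationBeforeRetry is consistent with doubling backoff from
# a 500ms initial delay: 64s is reached on the 8th consecutive failure.
INITIAL = 0.5    # seconds; assumed upstream default
CAP = 122.0      # seconds (2m2s); assumed upstream default

def duration_before_retry(failures: int) -> float:
    """Delay applied after the nth consecutive failure of a volume operation."""
    return min(INITIAL * 2 ** (failures - 1), CAP)

for n in range(1, 10):
    print(n, duration_before_retry(n), "s")
# failure 8 -> 64.0 s, matching the logged durationBeforeRetry of 1m4s;
# failure 9 would hit the assumed 122 s cap.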
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.442886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.442920 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.442929 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.442942 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.442950 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.525759 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.525791 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.525811 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.525827 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.525923 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.525938 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.525947 4520 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.525984 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.525969554 +0000 UTC m=+146.154321725 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.525980 4520 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.526061 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.526037161 +0000 UTC m=+146.154389362 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.525979 4520 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.526099 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.526092937 +0000 UTC m=+146.154445148 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.526353 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.526423 4520 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.526486 4520 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.526610 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.526594363 +0000 UTC m=+146.154946534 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.544548 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.544577 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.544586 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.544597 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.544604 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.646565 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.646599 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.646609 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.646621 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.646629 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.685021 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.685075 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.685026 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.685150 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.685229 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.685289 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.685405 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.685563 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.693777 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:49:46.712393979 +0000 UTC Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.748385 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.748441 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.748453 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.748465 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.748483 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.849692 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.849720 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.849729 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.849742 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.849751 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
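Every pod sync above is gated on the same condition: NetworkReady stays false until a CNI network configuration appears in /etc/kubernetes/cni/net.d/. A minimal stand-alone check mirroring what the message describes; the directory comes from the log, while the extension patterns follow CNI convention and are an assumption here:

# Minimal reproduction of the readiness check the message describes: the
# network plugin is considered absent until a config file shows up.
import glob
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

def cni_config_present(conf_dir: str = CNI_CONF_DIR) -> bool:
    """True once at least one CNI network configuration file exists."""
    patterns = ("*.conf", "*.conflist", "*.json")
    return any(glob.glob(os.path.join(conf_dir, p)) for p in patterns)

if not cni_config_present():
    print("NetworkPluginNotReady: no CNI configuration file in", CNI_CONF_DIR)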
Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.852641 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.852751 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.852769 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.852784 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.852795 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.862351 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:08Z is after 
2025-08-24T17:21:41Z" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.864700 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.864725 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.864733 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.864743 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.864750 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.872690 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
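The node status patch itself is well formed; it is rejected because the serving certificate behind the node.network-node-identity.openshift.io webhook expired long before the log time, so every patch attempt (06:46:08.862, .872, .882 in this stretch) fails identically. Both timestamps in the arithmetic below come straight from the error text:

# The webhook certificate's notAfter and the kubelet's current time, as
# reported in the x509 error; the delta shows how long it has been expired.
from datetime import datetime, timezone

not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)
now = datetime(2026, 1, 30, 6, 46, 8, tzinfo=timezone.utc)
print("certificate expired", now - not_after, "ago")  # roughly 158 days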
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:08Z is after 
2025-08-24T17:21:41Z" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.874822 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.874849 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.874860 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.874871 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.874878 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.882941 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:08Z is after 
2025-08-24T17:21:41Z" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.885248 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.885278 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.885289 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.885300 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.885308 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.893245 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:08Z is after 
2025-08-24T17:21:41Z" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.895625 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.895654 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.895663 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.895674 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.895682 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.903947 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:08Z is after 
2025-08-24T17:21:41Z" Jan 30 06:46:08 crc kubenswrapper[4520]: E0130 06:46:08.904169 4520 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.951299 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.951328 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.951339 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.951350 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:08 crc kubenswrapper[4520]: I0130 06:46:08.951357 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:08Z","lastTransitionTime":"2026-01-30T06:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.053244 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.053270 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.053279 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.053288 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.053296 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.154744 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.154875 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.154932 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.154998 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.155054 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.256214 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.256304 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.256390 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.256454 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.256540 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.358630 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.358662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.358673 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.358685 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.358695 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.460808 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.460910 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.460976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.461046 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.461103 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.562506 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.562598 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.562615 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.562636 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.562649 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.665062 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.665092 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.665100 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.665111 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.665119 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.694227 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:16:15.481537347 +0000 UTC Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.767361 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.767423 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.767432 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.767446 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.767453 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.869538 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.869560 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.869569 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.869581 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.869589 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.971203 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.971233 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.971244 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.971254 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:09 crc kubenswrapper[4520]: I0130 06:46:09.971263 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:09Z","lastTransitionTime":"2026-01-30T06:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.072866 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.072896 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.072904 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.072916 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.072925 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.174673 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.174699 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.174708 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.174718 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.174726 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.276479 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.276511 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.276535 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.276545 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.276554 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.377697 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.377721 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.377731 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.377743 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.377751 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.479550 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.479571 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.479580 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.479590 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.479599 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.581396 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.581416 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.581424 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.581435 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.581448 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.682529 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.682550 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.682559 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.682569 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.682577 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.684832 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:10 crc kubenswrapper[4520]: E0130 06:46:10.684911 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.684935 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:10 crc kubenswrapper[4520]: E0130 06:46:10.685002 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.685024 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:10 crc kubenswrapper[4520]: E0130 06:46:10.685065 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.685131 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:10 crc kubenswrapper[4520]: E0130 06:46:10.685197 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.694698 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:45:50.369059896 +0000 UTC Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.784421 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.784446 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.784454 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.784464 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.784473 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.886710 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.886731 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.886741 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.886750 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.886757 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.988253 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.988293 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.988303 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.988335 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:10 crc kubenswrapper[4520]: I0130 06:46:10.988344 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:10Z","lastTransitionTime":"2026-01-30T06:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.089768 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.089817 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.089827 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.089846 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.089856 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.191684 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.191709 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.191718 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.191728 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.191736 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.294105 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.294244 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.294309 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.294371 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.294549 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.396071 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.396112 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.396124 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.396141 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.396151 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.497628 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.497657 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.497667 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.497678 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.497687 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.599708 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.599737 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.599746 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.599758 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.599765 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.695759 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:31:26.297251442 +0000 UTC Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.700713 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.700737 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.700746 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.700755 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.700762 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.802745 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.802778 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.802787 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.802801 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.802809 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.904659 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.904694 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.904703 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.904713 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:11 crc kubenswrapper[4520]: I0130 06:46:11.904721 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:11Z","lastTransitionTime":"2026-01-30T06:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.006131 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.006159 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.006167 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.006180 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.006191 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.107725 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.107776 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.107790 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.107807 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.107819 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.209976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.210006 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.210014 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.210024 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.210032 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.311713 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.311740 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.311750 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.311761 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.311768 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.413754 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.413786 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.413793 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.413807 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.413815 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.514959 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.514986 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.514993 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.515002 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.515011 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.616275 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.616296 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.616305 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.616315 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.616322 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.685440 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:12 crc kubenswrapper[4520]: E0130 06:46:12.685560 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.685591 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.685627 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.685651 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:12 crc kubenswrapper[4520]: E0130 06:46:12.685780 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:12 crc kubenswrapper[4520]: E0130 06:46:12.685844 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:12 crc kubenswrapper[4520]: E0130 06:46:12.685921 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.695895 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:14:18.560000091 +0000 UTC Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.717476 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.717502 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.717538 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.717549 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.717557 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.818963 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.818980 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.818989 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.818998 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.819006 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.920538 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.920569 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.920579 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.920592 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:12 crc kubenswrapper[4520]: I0130 06:46:12.920600 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:12Z","lastTransitionTime":"2026-01-30T06:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.022620 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.022652 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.022660 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.022672 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.022681 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.124688 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.124719 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.124729 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.124739 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.124749 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.226497 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.226554 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.226564 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.226577 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.226585 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.328428 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.328495 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.328505 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.328549 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.328561 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.430374 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.430398 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.430407 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.430418 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.430425 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.531475 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.531502 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.531511 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.531560 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.531569 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.633218 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.633241 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.633249 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.633259 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.633265 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.696274 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:41:43.154979137 +0000 UTC Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.735181 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.735211 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.735222 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.735233 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.735241 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.837020 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.837062 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.837072 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.837083 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.837089 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.938679 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.938702 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.938711 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.938721 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:13 crc kubenswrapper[4520]: I0130 06:46:13.938727 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:13Z","lastTransitionTime":"2026-01-30T06:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.040676 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.040709 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.040719 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.040732 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.040741 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.142073 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.142093 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.142101 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.142110 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.142117 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.247506 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.247553 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.247562 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.247572 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.247579 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.349230 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.349260 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.349268 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.349279 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.349289 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.450329 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.450356 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.450366 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.450376 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.450386 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.551643 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.551665 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.551672 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.551681 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.551687 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.652679 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.652710 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.652718 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.652727 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.652748 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.685284 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.685293 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.685471 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:14 crc kubenswrapper[4520]: E0130 06:46:14.685369 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:14 crc kubenswrapper[4520]: E0130 06:46:14.685744 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.685851 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:14 crc kubenswrapper[4520]: E0130 06:46:14.685904 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:14 crc kubenswrapper[4520]: E0130 06:46:14.685995 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.697114 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 15:23:15.381222797 +0000 UTC Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.754141 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.754162 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.754170 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.754180 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.754192 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.855443 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.855473 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.855482 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.855493 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.855501 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.957387 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.957412 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.957421 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.957431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:14 crc kubenswrapper[4520]: I0130 06:46:14.957438 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:14Z","lastTransitionTime":"2026-01-30T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.058663 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.058689 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.058696 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.058707 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.058715 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.159844 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.159873 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.159883 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.159895 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.159902 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.261303 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.261326 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.261333 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.261343 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.261349 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.363044 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.363070 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.363078 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.363086 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.363093 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.464784 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.464810 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.464818 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.464828 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.464837 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.566750 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.566778 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.566787 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.566797 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.566805 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.668254 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.668286 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.668295 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.668310 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.668318 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.685042 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:46:15 crc kubenswrapper[4520]: E0130 06:46:15.685185 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.698138 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:59:32.32424082 +0000 UTC Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.769636 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.769663 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.769671 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.769680 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.769687 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.871464 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.871512 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.871574 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.871596 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.871611 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.973362 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.973390 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.973397 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.973408 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:15 crc kubenswrapper[4520]: I0130 06:46:15.973415 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:15Z","lastTransitionTime":"2026-01-30T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.075079 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.075102 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.075110 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.075119 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.075141 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.176348 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.176394 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.176409 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.176425 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.176437 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.278455 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.278487 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.278499 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.278511 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.278547 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.380340 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.380369 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.380377 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.380386 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.380393 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.481985 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.482014 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.482022 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.482032 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.482057 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.583986 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.584015 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.584023 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.584035 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.584045 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.684621 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.684675 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:16 crc kubenswrapper[4520]: E0130 06:46:16.684726 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.684733 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.684622 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:16 crc kubenswrapper[4520]: E0130 06:46:16.684790 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:16 crc kubenswrapper[4520]: E0130 06:46:16.684865 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:16 crc kubenswrapper[4520]: E0130 06:46:16.684932 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.685263 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.685288 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.685299 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.685309 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.685317 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.693955 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hf7k5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1449aaf1-dd5f-42a6-89e3-5cd09937b8a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5aedbdb4a22aec02ade41b850034115ba0e6b584e2e7195b6ab548ef4291665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqhqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hf7k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.698622 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:19:56.743525498 +0000 UTC Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.702348 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f51275-c0b1-4467-bf4a-ef848e3521df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e259c411b8e91626ab987a1ca449092d507e84f0e06c3cd291b6e8498099a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc2qj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dkqtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.710761 4520 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0ff960a-01ac-4427-a870-5a981ff4628f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2b
a3587eedb3c42adf60782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 06:44:58.884331 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 06:44:58.885569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2773797061/tls.crt::/tmp/serving-cert-2773797061/tls.key\\\\\\\"\\\\nI0130 06:45:04.225722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 06:45:04.230055 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 06:45:04.230073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 06:45:04.230274 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 06:45:04.230284 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 06:45:04.234463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0130 06:45:04.234465 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 06:45:04.234492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234496 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 06:45:04.234500 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 06:45:04.234502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 06:45:04.234506 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 06:45:04.234508 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 06:45:04.235913 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.719010 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.726491 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://825d7701b78c68a781b7b006ada54619862b4e4777963d863848aea1bc59e18c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4506c9de9560d0f25641895cad2485c8f7cc83ff756fe729f57a62f59181e48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.733994 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.741326 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c90355dcda2cbb923c6de20ef4bebb5be3f14a6bcff71b664445f0689961ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.754741 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7723909c-e6d6-4174-aa52-a25a8729e596\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c068db0217da8374627bab0e8931674cce2d0272ef8e9ed8450ac3069db11d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a45fce0f5e1737297faa9cc3bb7076cf0030bf0117dd4a852f3f0a287911cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58567088f889eb4332ffb6103399143024cea9ba41ae2d1276c760e0953a090d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5df60de2483b524d07691f715140e7089c9e385
7cfa98310c1d942a96a711892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2ed1478a8838ee108192b8a47a09c03da25e79a728c1324e8d6f23541b45ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a020ffdd10c429ac809391ad128e2e189304ead8f7b7a6834754af9473d285ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://007778127a0e47cd70264db6a97c901b3a8286ea2be5fd499c73e09ec03b47b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535ba7116decd000937170b5df6e5ad5a76319d459b49444001b56fafd773434\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.770033 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cf22e03-047f-487d-8f13-a0b2643caca1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b51027471ec52c3860266d5c4e7b1b2f280867adf0ea5507c13daa8ae5a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7168ec27ef647ab19e300c2481102ab681027c4db7f200824549c1230e27df97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0669b189d8d1992c3c511a20191a074d65ecaf5c87b7a938960d7397c0a8974\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.779510 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee18b84b-4e10-42ed-ac93-557943206072\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417284b540e5095c86cbed539b48be5213483a2bc5e7947dd6a148fc6f45e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3640ae9c2bb1c9a9d322637ba72c47ec1778346d2c03b431207498a826fb6deb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37cea4e2de71c58145ed9948c9991c2f5e84856a635cbb0beb8aeedef80792c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7d20e41df7ed595f929c824c5808479bb5935f037afaeecd032663d4d14f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b6d5566d7df6b8ac65de80b2b3cdfc54843edc35d6671eed30114434fd6dd0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bd9f7cffb9339dbad57701a910067f54aa4ff1677baab3108c8d0f6d59aafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec7144cc84e66f998676f4c2dfe7cc2bb69d2bcb70dda213d89bfe0c89af3d7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kdqjc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.787153 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.787181 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc 
kubenswrapper[4520]: I0130 06:46:16.787189 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.787201 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.787208 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.790733 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705f09bd-e1b6-47fd-83db-189fbe9a7b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd3
21a8862e606a18a83a6f902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"message\\\":\\\"services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 06:45:57.387330 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:45:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 06:45:57.387359 6458 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-hf7k5\\\\nI0130 06:45:57.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:45:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc94g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tm5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.797818 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d0da278-9de0-4cfe-8f2b-b15ce7445923\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33144075cc4b12176da829bf3fa8f8d11b6e56fae342a4cc12e28f2a83268cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc3e82fc5b1455769c2618e3e32f21d800d7f6d510cd344068dc3ac90ccb6a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwgkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tkcc8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.804717 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56fecd5a-4387-4e8d-b999-9b893d10dda8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cfdbf2ac64a3089a349ad033770210d594956c8395afe2b65ece4cd9a234b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffb071ac9d3d42a711e23a6868eca346b62b7f4802226ed4283e895c1db00216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e33b3a1734c6dbfb28a8708410e6b63edaaa276054ebb52e1ae99efdeeb2cf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.811137 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2de9fcdc-e1c8-4275-a53b-b0648a2327fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5785142c6cf161b6452de8efa5caafe1bd42705e2454274648f552108de7c84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb4b80eaa5a81e0a2545293c9e5b5511d1385569c85e0ad7804758bae1725473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T06:44:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T06:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:44:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.819014 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb52f0d855b9c2f2a38dc9652b9835b9431c3dc29210e7822e8f1e43bcf6203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.826818 4520 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-mn7g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T06:45:55Z\\\",\\\"message\\\":\\\"2026-01-30T06:45:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7\\\\n2026-01-30T06:45:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ec65152-7d7a-4032-a1d3-ef63ddcc03c7 to /host/opt/cni/bin/\\\\n2026-01-30T06:45:10Z [verbose] multus-daemon started\\\\n2026-01-30T06:45:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T06:45:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T06:45:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhvlk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mn7g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.833257 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1a8ebe-5163-47dd-a320-a286c92971c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2bdr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z5rcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.841399 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.847876 4520 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t6th8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed0fb361-02d3-4a8d-90c6-2c386499c01f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T06:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3901f212dddc0d99128662fb56e09f6382b60847a630f4da8d2a272ca5064536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T06:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T06:45:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t6th8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:16Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.888994 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.889021 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.889031 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.889044 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.889052 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.990211 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.990242 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.990252 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.990264 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:16 crc kubenswrapper[4520]: I0130 06:46:16.990273 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:16Z","lastTransitionTime":"2026-01-30T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.091271 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.091296 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.091304 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.091324 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.091331 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.192333 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.192359 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.192368 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.192378 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.192385 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.294024 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.294044 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.294052 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.294061 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.294068 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.395783 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.395805 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.395813 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.395822 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.395830 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.497665 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.497718 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.497733 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.497752 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.497765 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.599315 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.599336 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.599344 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.599353 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.599361 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.699475 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:38:50.649114555 +0000 UTC Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.700583 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.700604 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.700612 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.700622 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.700629 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.802317 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.802363 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.802373 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.802384 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.802396 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.903960 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.904000 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.904009 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.904017 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:17 crc kubenswrapper[4520]: I0130 06:46:17.904023 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:17Z","lastTransitionTime":"2026-01-30T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.006246 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.006276 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.006284 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.006296 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.006305 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.107614 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.107716 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.107786 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.107858 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.107911 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.209328 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.209353 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.209361 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.209372 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.209380 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.311508 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.311574 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.311586 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.311599 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.311610 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.413723 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.413754 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.413763 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.413774 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.413801 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.515018 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.515047 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.515055 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.515066 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.515078 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.617027 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.617061 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.617071 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.617084 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.617093 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.685247 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.685297 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:18 crc kubenswrapper[4520]: E0130 06:46:18.685338 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:18 crc kubenswrapper[4520]: E0130 06:46:18.685387 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.685445 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:18 crc kubenswrapper[4520]: E0130 06:46:18.685497 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.685595 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:18 crc kubenswrapper[4520]: E0130 06:46:18.685647 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.700213 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:41:20.830101287 +0000 UTC Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.718362 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.718383 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.718392 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.718406 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.718414 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.820437 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.820464 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.820474 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.820488 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.820499 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.922016 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.922039 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.922046 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.922055 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:18 crc kubenswrapper[4520]: I0130 06:46:18.922222 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:18Z","lastTransitionTime":"2026-01-30T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.023375 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.023404 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.023412 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.023421 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.023428 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.090992 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.091040 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.091049 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.091058 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.091065 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: E0130 06:46:19.103589 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.105932 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.105973 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.105984 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.105996 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.106004 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: E0130 06:46:19.114417 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.116285 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.116309 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.116317 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.116327 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.116335 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: E0130 06:46:19.124294 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.126476 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.126500 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
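
Every retry fails for the same reason, spelled out in the final clause: the serving certificate behind https://127.0.0.1:9743 (the node.network-node-identity.openshift.io webhook) expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30, so the apiserver's webhook call can never complete a TLS handshake. A short sketch of the validity-window check Go's TLS stack performs, using a hypothetical on-disk copy of the certificate:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical path; substitute wherever the webhook's serving
    	// certificate is actually mounted.
    	raw, err := os.ReadFile("/tmp/webhook-serving.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The same window check that produced "certificate has expired or
    	// is not yet valid" in the handshakes above.
    	now := time.Now()
    	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
    		fmt.Printf("invalid at %s: valid from %s until %s\n",
    			now.UTC().Format(time.RFC3339), cert.NotBefore, cert.NotAfter)
    		return
    	}
    	fmt.Println("certificate is within its validity window")
    }
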
event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.126508 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.126530 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.126537 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: E0130 06:46:19.133615 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.135510 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.135558 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.135567 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.135577 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.135583 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: E0130 06:46:19.142547 4520 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T06:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"28bb964a-9c71-4787-ad40-4262dd439958\\\",\\\"systemUUID\\\":\\\"4674bc25-0afd-48cd-9644-935726ab41fb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T06:46:19Z is after 2025-08-24T17:21:41Z" Jan 30 06:46:19 crc kubenswrapper[4520]: E0130 06:46:19.142639 4520 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.143454 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
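
Five patch attempts a few milliseconds apart and then "update node status exceeds retry count": the kubelet retries the status update a small fixed number of times per sync loop rather than backing off, so a deterministic failure like an expired certificate exhausts the budget almost instantly and the whole cycle repeats on the next sync. A sketch of that pattern, assuming the upstream constant (nodeStatusUpdateRetry = 5) still applies in this build:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // Assumption: mirrors the kubelet constant of the same name.
    const nodeStatusUpdateRetry = 5

    // updateNodeStatus retries the patch a fixed number of times with no
    // backoff, matching the "will retry" / "exceeds retry count" sequence.
    func updateNodeStatus(patch func() error) error {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := patch(); err != nil {
    			fmt.Println("Error updating node status, will retry:", err)
    			continue
    		}
    		return nil
    	}
    	return errors.New("update node status exceeds retry count")
    }

    func main() {
    	err := updateNodeStatus(func() error {
    		// Stand-in for the webhook call that always fails here.
    		return errors.New("failed calling webhook: certificate has expired")
    	})
    	fmt.Println(err)
    }
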
event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.143482 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.143490 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.143506 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.143525 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.245217 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.245243 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.245252 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.245261 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.245269 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.346391 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.346422 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.346431 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.346443 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.346450 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.449796 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.449824 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.449835 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.449847 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.449856 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.551631 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.551690 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.551705 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.551724 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.551737 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.653406 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.653441 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.653450 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.653462 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.653470 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.701100 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:34:09.906672739 +0000 UTC Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.755366 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.755396 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.755405 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.755415 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.755423 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.857440 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.857492 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.857507 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.857564 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.857579 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
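
The certificate_manager.go line concerns a different certificate from the expired webhook one: the kubelet's own serving certificate, still valid until 2026-02-24. Its rotation deadline is a jittered point late in the certificate's lifetime and is recomputed on every pass, which is why a later line below reports a different deadline (2025-11-19) for the same expiry; both deadlines are already in the past on this clock, so rotation is due immediately. A sketch of the deadline computation as client-go implements it (the ~70% floor and jitter factor are from the versions I have read; the issue date below is assumed):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // nextRotationDeadline picks a uniformly jittered point late in the
    // certificate's validity window, so each call yields a new deadline.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
    	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed issue date
    	fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
    }
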
Has your network provider started?"} Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.959080 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.959106 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.959114 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.959123 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:19 crc kubenswrapper[4520]: I0130 06:46:19.959130 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:19Z","lastTransitionTime":"2026-01-30T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.060833 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.060873 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.060885 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.060899 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.060909 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.162161 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.162364 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.162436 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.162498 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.162599 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.263926 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.264074 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.264137 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.264202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.264268 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.365871 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.365895 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.365901 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.365910 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.365918 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.467737 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.467767 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.467774 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.467800 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.467809 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
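
Note that every setters.go record reports lastTransitionTime equal to lastHeartbeatTime, plausibly because the failed patches mean the apiserver's copy of the Ready condition never updates, so each local evaluation has no prior condition to compare against and looks like a fresh transition. The condition object itself is plain JSON; a standard-library sketch with field names matching the log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"time"
    )

    // NodeCondition mirrors the JSON shape in the setters.go records above.
    type NodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	now := time.Now().UTC().Format(time.RFC3339)
    	c := NodeCondition{
    		Type:               "Ready",
    		Status:             "False",
    		LastHeartbeatTime:  now,
    		LastTransitionTime: now, // equal to the heartbeat, as in the log
    		Reason:             "KubeletNotReady",
    		Message:            "container runtime network not ready: NetworkReady=false ...",
    	}
    	b, _ := json.Marshal(c)
    	fmt.Println(string(b))
    }
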
Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.569408 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.569433 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.569442 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.569456 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.569463 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.670931 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.670966 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.670979 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.670993 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.671005 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.685526 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.685573 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.685587 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.685618 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:20 crc kubenswrapper[4520]: E0130 06:46:20.685674 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:20 crc kubenswrapper[4520]: E0130 06:46:20.685742 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:20 crc kubenswrapper[4520]: E0130 06:46:20.685852 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:20 crc kubenswrapper[4520]: E0130 06:46:20.685801 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.701802 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:26:04.886386741 +0000 UTC Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.773571 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.773816 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.773828 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.773843 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.773851 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.876684 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.876710 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.876719 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.876731 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.876739 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.979371 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.979401 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.979409 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.979422 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:20 crc kubenswrapper[4520]: I0130 06:46:20.979431 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:20Z","lastTransitionTime":"2026-01-30T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.080651 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.080695 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.080704 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.080717 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.080725 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.182842 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.182886 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.182904 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.182918 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.182926 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.284759 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.285003 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.285078 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.285143 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.285199 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.386498 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.386629 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.386767 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.386825 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.386873 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.488314 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.488347 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.488357 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.488371 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.488380 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.589625 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.589755 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.589832 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.589916 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.589979 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.691306 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.691327 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.691335 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.691343 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.691368 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.702692 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:43:23.98849427 +0000 UTC Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.792564 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.792605 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.792612 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.792621 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.792629 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.894426 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.894452 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.894460 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.894470 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.894477 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.995662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.995713 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.995722 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.995732 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:21 crc kubenswrapper[4520]: I0130 06:46:21.995741 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:21Z","lastTransitionTime":"2026-01-30T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.097037 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.097057 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.097064 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.097073 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.097079 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.198859 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.198962 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.199023 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.199087 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.199153 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.300359 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.300383 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.300391 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.300399 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.300406 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.401487 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.401508 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.401529 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.401538 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.401545 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.502663 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.502763 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.502834 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.502890 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.502953 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.604708 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.604742 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.604751 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.604760 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.604767 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.685592 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.685622 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:22 crc kubenswrapper[4520]: E0130 06:46:22.685680 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.685700 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.685757 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:22 crc kubenswrapper[4520]: E0130 06:46:22.685808 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:22 crc kubenswrapper[4520]: E0130 06:46:22.685867 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:22 crc kubenswrapper[4520]: E0130 06:46:22.685906 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.702776 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:05:47.151316087 +0000 UTC Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.705788 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.705817 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.705826 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.705835 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.705842 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.808008 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.808033 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.808058 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.808084 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.808092 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.909666 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.909703 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.909712 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.909726 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:22 crc kubenswrapper[4520]: I0130 06:46:22.909737 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:22Z","lastTransitionTime":"2026-01-30T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.011238 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.011270 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.011280 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.011293 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.011305 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.113479 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.113508 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.113537 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.113551 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.113558 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.215091 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.215117 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.215125 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.215137 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.215146 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.316373 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.316394 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.316402 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.317134 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.317146 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.418405 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.418428 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.418436 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.418444 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.418451 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.520036 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.520074 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.520083 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.520093 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.520100 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.622356 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.622392 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.622400 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.622413 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.622423 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.703331 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:41:33.290186153 +0000 UTC Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.723629 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.723692 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.723704 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.723715 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.723723 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.825404 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.825429 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.825437 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.825446 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.825453 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.927016 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.927046 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.927055 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.927066 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:23 crc kubenswrapper[4520]: I0130 06:46:23.927074 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:23Z","lastTransitionTime":"2026-01-30T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.028633 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.028656 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.028664 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.028674 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.028680 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.130438 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.130463 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.130472 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.130482 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.130489 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.232595 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.232626 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.232635 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.232649 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.232659 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.333856 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.333889 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.333899 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.333912 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.333920 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.435918 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.435941 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.435949 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.435958 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.435966 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.537624 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.537644 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.537651 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.537660 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.537668 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.638548 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.638580 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.638588 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.638596 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.638603 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.685159 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.685173 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.685234 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:24 crc kubenswrapper[4520]: E0130 06:46:24.685231 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.685336 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:24 crc kubenswrapper[4520]: E0130 06:46:24.685410 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:24 crc kubenswrapper[4520]: E0130 06:46:24.685748 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:24 crc kubenswrapper[4520]: E0130 06:46:24.685800 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.703933 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:40:29.965227114 +0000 UTC Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.740339 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.740359 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.740366 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.740375 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.740382 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.842031 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.842065 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.842073 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.842089 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.842098 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.944312 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.944351 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.944359 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.944372 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:24 crc kubenswrapper[4520]: I0130 06:46:24.944382 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:24Z","lastTransitionTime":"2026-01-30T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.046567 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.046612 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.046621 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.046634 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.046644 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.148823 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.148854 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.148862 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.148873 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.148882 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.250528 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.250559 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.250568 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.250590 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.250599 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.352045 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.352078 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.352090 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.352104 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.352113 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.453678 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.453828 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.453892 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.453962 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.454026 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.555291 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.555341 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.555354 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.555368 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.555377 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.657443 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.657471 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.657481 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.657493 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.657501 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.704298 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:33:51.275430871 +0000 UTC Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.759617 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.759647 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.759657 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.759671 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.759681 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.861668 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.861697 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.861709 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.861719 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.861727 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.964306 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.964334 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.964342 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.964351 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:25 crc kubenswrapper[4520]: I0130 06:46:25.964361 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:25Z","lastTransitionTime":"2026-01-30T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.065796 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.065824 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.065831 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.065841 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.065867 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.167612 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.167642 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.167653 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.167662 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.167669 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.269442 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.269472 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.269479 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.269492 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.269500 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.371155 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.371197 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.371213 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.371229 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.371240 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.459813 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:26 crc kubenswrapper[4520]: E0130 06:46:26.459915 4520 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 06:46:26 crc kubenswrapper[4520]: E0130 06:46:26.459986 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs podName:6e1a8ebe-5163-47dd-a320-a286c92971c2 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:30.459968437 +0000 UTC m=+164.088320628 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs") pod "network-metrics-daemon-z5rcx" (UID: "6e1a8ebe-5163-47dd-a320-a286c92971c2") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.472854 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.472885 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.472894 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.472906 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.472915 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.574026 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.574071 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.574091 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.574100 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.574107 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
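The nestedpendingoperations entry above refuses retries for 1m4s ("No retries permitted until ... durationBeforeRetry 1m4s"): failing volume operations are retried with a doubling backoff, and 1m4s is what a 500ms initial delay doubles to after eight straight failures. That is the assumption behind this sketch; the constants are chosen to reproduce the logged value, not read from the kubelet source:

    // backoff.go - sketch of the doubling backoff visible in the
    // nestedpendingoperations message "(durationBeforeRetry 1m4s)".
    // Initial delay and cap are assumptions for illustration.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 500 * time.Millisecond // assumed initial backoff
    	maxWait := 2 * time.Minute      // assumed upper bound
    	for failure := 1; failure <= 9; failure++ {
    		fmt.Printf("failure %d -> wait %v\n", failure, delay)
    		delay *= 2
    		if delay > maxWait {
    			delay = maxWait
    		}
    	}
    	// failure 8 -> wait 1m4s, matching the log's durationBeforeRetry
    }

The underlying error ("object "openshift-multus"/"metrics-daemon-secret" not registered") means the secret is not yet visible to the kubelet's object cache, so the mount will keep failing until the API objects are synced.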
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.675945 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.675976 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.675985 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.675995 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.676003 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.685299 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 06:46:26 crc kubenswrapper[4520]: E0130 06:46:26.685371 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.685458 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.685703 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 06:46:26 crc kubenswrapper[4520]: E0130 06:46:26.685791 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.685805 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx"
Jan 30 06:46:26 crc kubenswrapper[4520]: E0130 06:46:26.685879 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:26 crc kubenswrapper[4520]: E0130 06:46:26.686004 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.704602 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:42:02.520491105 +0000 UTC Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.708231 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hf7k5" podStartSLOduration=78.708222621 podStartE2EDuration="1m18.708222621s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.707998039 +0000 UTC m=+100.336350220" watchObservedRunningTime="2026-01-30 06:46:26.708222621 +0000 UTC m=+100.336574802" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.727456 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podStartSLOduration=78.727446281 podStartE2EDuration="1m18.727446281s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.717263753 +0000 UTC m=+100.345615934" watchObservedRunningTime="2026-01-30 06:46:26.727446281 +0000 UTC m=+100.355798462" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.727567 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.727563072 podStartE2EDuration="1m22.727563072s" podCreationTimestamp="2026-01-30 06:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.727206401 +0000 UTC m=+100.355558581" watchObservedRunningTime="2026-01-30 06:46:26.727563072 +0000 UTC m=+100.355915252" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.770101 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tkcc8" podStartSLOduration=78.770086952 podStartE2EDuration="1m18.770086952s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.769797287 +0000 UTC m=+100.398149468" watchObservedRunningTime="2026-01-30 06:46:26.770086952 +0000 UTC m=+100.398439133" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.778204 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.778251 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 
06:46:26.778261 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.778275 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.778283 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.786881 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=80.786870472 podStartE2EDuration="1m20.786870472s" podCreationTimestamp="2026-01-30 06:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.78664141 +0000 UTC m=+100.414993592" watchObservedRunningTime="2026-01-30 06:46:26.786870472 +0000 UTC m=+100.415222643" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.808708 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.80869677 podStartE2EDuration="1m22.80869677s" podCreationTimestamp="2026-01-30 06:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.797220806 +0000 UTC m=+100.425572987" watchObservedRunningTime="2026-01-30 06:46:26.80869677 +0000 UTC m=+100.437048950" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.808981 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kdqjc" podStartSLOduration=78.808976876 podStartE2EDuration="1m18.808976876s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.808678295 +0000 UTC m=+100.437030475" watchObservedRunningTime="2026-01-30 06:46:26.808976876 +0000 UTC m=+100.437329048" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.849319 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.849304114 podStartE2EDuration="52.849304114s" podCreationTimestamp="2026-01-30 06:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.849081104 +0000 UTC m=+100.477433285" watchObservedRunningTime="2026-01-30 06:46:26.849304114 +0000 UTC m=+100.477656295" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.855336 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.855325554 podStartE2EDuration="30.855325554s" podCreationTimestamp="2026-01-30 06:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 06:46:26.855097344 +0000 UTC m=+100.483449525" watchObservedRunningTime="2026-01-30 06:46:26.855325554 +0000 UTC m=+100.483677735" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.874819 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mn7g2" podStartSLOduration=78.874802821 podStartE2EDuration="1m18.874802821s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.874418559 +0000 UTC m=+100.502770740" watchObservedRunningTime="2026-01-30 06:46:26.874802821 +0000 UTC m=+100.503155002" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.880209 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.880241 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.880251 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.880262 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.880271 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.891018 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t6th8" podStartSLOduration=78.890999396 podStartE2EDuration="1m18.890999396s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.889989786 +0000 UTC m=+100.518341967" watchObservedRunningTime="2026-01-30 06:46:26.890999396 +0000 UTC m=+100.519351647"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.982389 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.982418 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.982426 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.982437 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:26 crc kubenswrapper[4520]: I0130 06:46:26.982446 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:26Z","lastTransitionTime":"2026-01-30T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.083826 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.083852 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.083861 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.083872 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.083881 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.185482 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.185562 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.185572 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.185591 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.185599 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.287261 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.287308 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.287317 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.287341 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.287348 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.388694 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.388720 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.388729 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.388739 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.388747 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.490534 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.490554 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.490562 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.490572 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.490578 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.592674 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.592697 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.592706 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.592717 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.592726 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.694904 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.695003 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.695067 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.695123 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.695175 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.705255 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:54:08.847808182 +0000 UTC
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.797001 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.797034 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.797044 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.797056 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.797065 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.898321 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.898346 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.898355 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.898367 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:27 crc kubenswrapper[4520]: I0130 06:46:27.898375 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:27Z","lastTransitionTime":"2026-01-30T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
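Each certificate_manager line reports a different rotation deadline (2026-01-07, 2026-01-13, 2026-01-02, 2026-01-04, ...) for the same 2026-02-24 expiry: the deadline is re-drawn with jitter on each pass, landing somewhere in roughly the 70-90% band of the certificate's lifetime. A sketch under that assumption; the band and the NotBefore value below are illustrative, not read from the cluster:

    // rotation.go - sketch of the jittered rotation deadline behind the
    // certificate_manager lines. The 70-90% band and the assumed 1y
    // validity are assumptions chosen to bracket the logged deadlines.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
    	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed 1y validity
    	for i := 0; i < 3; i++ {
    		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    	}
    }

With a one-year validity the 70-90% band runs from early November 2025 to mid-January 2026, which is consistent with every deadline logged in this section.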
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.000022 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.000049 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.000058 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.000069 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.000078 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.102171 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.102196 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.102204 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.102213 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.102220 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.204132 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.204161 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.204169 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.204180 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.204187 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.306144 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.306184 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.306193 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.306208 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.306218 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.407726 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.407846 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.407915 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.407971 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.408032 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.509193 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.509221 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.509234 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.509246 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.509254 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.611163 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.611193 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.611202 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.611212 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.611221 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.685268 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.685293 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:28 crc kubenswrapper[4520]: E0130 06:46:28.685424 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.685450 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.685437 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:28 crc kubenswrapper[4520]: E0130 06:46:28.685552 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:28 crc kubenswrapper[4520]: E0130 06:46:28.685600 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:28 crc kubenswrapper[4520]: E0130 06:46:28.685645 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.686018 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:46:28 crc kubenswrapper[4520]: E0130 06:46:28.686131 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tm5s_openshift-ovn-kubernetes(705f09bd-e1b6-47fd-83db-189fbe9a7b95)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.705882 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:46:34.576373143 +0000 UTC Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.712978 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.713012 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.713023 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.713034 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.713042 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.815086 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.815110 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.815117 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.815126 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.815135 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.916909 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.916954 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.916967 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.916983 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:28 crc kubenswrapper[4520]: I0130 06:46:28.916996 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:28Z","lastTransitionTime":"2026-01-30T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.018860 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.018893 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.018900 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.018912 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.018923 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:29Z","lastTransitionTime":"2026-01-30T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.120686 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.120713 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.120721 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.120731 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.120739 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:29Z","lastTransitionTime":"2026-01-30T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.222443 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.222470 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.222479 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.222488 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.222494 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:29Z","lastTransitionTime":"2026-01-30T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.324688 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.324731 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.324741 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.324751 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.324765 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:29Z","lastTransitionTime":"2026-01-30T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.359759 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.359787 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.359796 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.359807 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.359815 4520 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T06:46:29Z","lastTransitionTime":"2026-01-30T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.387312 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg"] Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.387641 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.389567 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.389786 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.389908 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.391604 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.481880 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9229502b-27a4-45d1-be4a-fa8cd1720acc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.481909 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9229502b-27a4-45d1-be4a-fa8cd1720acc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.481944 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9229502b-27a4-45d1-be4a-fa8cd1720acc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.481971 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9229502b-27a4-45d1-be4a-fa8cd1720acc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.482077 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9229502b-27a4-45d1-be4a-fa8cd1720acc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.583092 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9229502b-27a4-45d1-be4a-fa8cd1720acc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.583120 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9229502b-27a4-45d1-be4a-fa8cd1720acc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.583137 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9229502b-27a4-45d1-be4a-fa8cd1720acc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.583159 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9229502b-27a4-45d1-be4a-fa8cd1720acc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.583190 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9229502b-27a4-45d1-be4a-fa8cd1720acc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.583194 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9229502b-27a4-45d1-be4a-fa8cd1720acc-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.583411 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9229502b-27a4-45d1-be4a-fa8cd1720acc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.583865 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9229502b-27a4-45d1-be4a-fa8cd1720acc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.587553 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9229502b-27a4-45d1-be4a-fa8cd1720acc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.595312 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9229502b-27a4-45d1-be4a-fa8cd1720acc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ptmzg\" (UID: \"9229502b-27a4-45d1-be4a-fa8cd1720acc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.698015 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.706764 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:35:48.570320897 +0000 UTC Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.706817 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 06:46:29 crc kubenswrapper[4520]: I0130 06:46:29.712623 4520 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 06:46:30 crc kubenswrapper[4520]: I0130 06:46:30.086713 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" event={"ID":"9229502b-27a4-45d1-be4a-fa8cd1720acc","Type":"ContainerStarted","Data":"a42c0e98df6c43818250279e364124a88abc122cc02c72da8a7c5cf8538e49df"} Jan 30 06:46:30 crc kubenswrapper[4520]: I0130 06:46:30.086752 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" event={"ID":"9229502b-27a4-45d1-be4a-fa8cd1720acc","Type":"ContainerStarted","Data":"37225478e03fa765c589033286c9776065160e60ed8988c2e3050cdef049a0e5"} Jan 30 06:46:30 crc kubenswrapper[4520]: I0130 06:46:30.684615 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:30 crc kubenswrapper[4520]: I0130 06:46:30.684686 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:30 crc kubenswrapper[4520]: E0130 06:46:30.684702 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:30 crc kubenswrapper[4520]: I0130 06:46:30.684621 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:30 crc kubenswrapper[4520]: E0130 06:46:30.684803 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:30 crc kubenswrapper[4520]: I0130 06:46:30.684811 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:30 crc kubenswrapper[4520]: E0130 06:46:30.684840 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:30 crc kubenswrapper[4520]: E0130 06:46:30.684890 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:32 crc kubenswrapper[4520]: I0130 06:46:32.684628 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:32 crc kubenswrapper[4520]: I0130 06:46:32.684685 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:32 crc kubenswrapper[4520]: E0130 06:46:32.684725 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:32 crc kubenswrapper[4520]: I0130 06:46:32.684770 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:32 crc kubenswrapper[4520]: I0130 06:46:32.684899 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:32 crc kubenswrapper[4520]: E0130 06:46:32.684894 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:32 crc kubenswrapper[4520]: E0130 06:46:32.685001 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:32 crc kubenswrapper[4520]: E0130 06:46:32.685032 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:34 crc kubenswrapper[4520]: I0130 06:46:34.684870 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:34 crc kubenswrapper[4520]: I0130 06:46:34.684949 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:34 crc kubenswrapper[4520]: I0130 06:46:34.685431 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:34 crc kubenswrapper[4520]: E0130 06:46:34.685335 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:34 crc kubenswrapper[4520]: E0130 06:46:34.685538 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:34 crc kubenswrapper[4520]: E0130 06:46:34.685445 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:34 crc kubenswrapper[4520]: I0130 06:46:34.685067 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:34 crc kubenswrapper[4520]: E0130 06:46:34.685611 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:36 crc kubenswrapper[4520]: I0130 06:46:36.684720 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:36 crc kubenswrapper[4520]: I0130 06:46:36.684740 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:36 crc kubenswrapper[4520]: I0130 06:46:36.684815 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:36 crc kubenswrapper[4520]: E0130 06:46:36.684914 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:36 crc kubenswrapper[4520]: I0130 06:46:36.684959 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:36 crc kubenswrapper[4520]: E0130 06:46:36.685845 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:36 crc kubenswrapper[4520]: E0130 06:46:36.685964 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:36 crc kubenswrapper[4520]: E0130 06:46:36.686017 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:38 crc kubenswrapper[4520]: I0130 06:46:38.685264 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:38 crc kubenswrapper[4520]: E0130 06:46:38.685343 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:38 crc kubenswrapper[4520]: I0130 06:46:38.685272 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:38 crc kubenswrapper[4520]: E0130 06:46:38.685491 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:38 crc kubenswrapper[4520]: I0130 06:46:38.685605 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:38 crc kubenswrapper[4520]: I0130 06:46:38.685672 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:38 crc kubenswrapper[4520]: E0130 06:46:38.685735 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:38 crc kubenswrapper[4520]: E0130 06:46:38.685831 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:40 crc kubenswrapper[4520]: I0130 06:46:40.684910 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:40 crc kubenswrapper[4520]: I0130 06:46:40.684971 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:40 crc kubenswrapper[4520]: I0130 06:46:40.685016 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:40 crc kubenswrapper[4520]: I0130 06:46:40.685074 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:40 crc kubenswrapper[4520]: E0130 06:46:40.685222 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:40 crc kubenswrapper[4520]: E0130 06:46:40.685296 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:40 crc kubenswrapper[4520]: E0130 06:46:40.685384 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:40 crc kubenswrapper[4520]: E0130 06:46:40.685715 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.110352 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/1.log" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.110918 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/0.log" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.110944 4520 generic.go:334] "Generic (PLEG): container finished" podID="dfdf507d-4d3e-40ac-a9dc-c39c411f4c26" containerID="d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21" exitCode=1 Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.110968 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mn7g2" event={"ID":"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26","Type":"ContainerDied","Data":"d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21"} Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.110994 4520 scope.go:117] "RemoveContainer" containerID="fea04c4b8676685ceb7079093d920b8930012b5e9647baf46dbeb2d09e5f9545" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.111260 4520 scope.go:117] "RemoveContainer" containerID="d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21" Jan 30 06:46:42 crc kubenswrapper[4520]: E0130 06:46:42.111406 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mn7g2_openshift-multus(dfdf507d-4d3e-40ac-a9dc-c39c411f4c26)\"" pod="openshift-multus/multus-mn7g2" podUID="dfdf507d-4d3e-40ac-a9dc-c39c411f4c26" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.124239 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ptmzg" podStartSLOduration=94.124230845 podStartE2EDuration="1m34.124230845s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:30.096178657 +0000 UTC m=+103.724530839" watchObservedRunningTime="2026-01-30 06:46:42.124230845 +0000 UTC m=+115.752583015" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.684974 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:42 crc kubenswrapper[4520]: E0130 06:46:42.685104 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.685149 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.685187 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.685232 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:42 crc kubenswrapper[4520]: E0130 06:46:42.685300 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:42 crc kubenswrapper[4520]: E0130 06:46:42.685338 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:42 crc kubenswrapper[4520]: E0130 06:46:42.685645 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:42 crc kubenswrapper[4520]: I0130 06:46:42.685821 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:46:43 crc kubenswrapper[4520]: I0130 06:46:43.114966 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/1.log" Jan 30 06:46:43 crc kubenswrapper[4520]: I0130 06:46:43.116629 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/3.log" Jan 30 06:46:43 crc kubenswrapper[4520]: I0130 06:46:43.118756 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerStarted","Data":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"} Jan 30 06:46:43 crc kubenswrapper[4520]: I0130 06:46:43.119062 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:46:43 crc kubenswrapper[4520]: I0130 06:46:43.291914 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podStartSLOduration=95.291898165 podStartE2EDuration="1m35.291898165s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:43.139947973 +0000 UTC m=+116.768300155" watchObservedRunningTime="2026-01-30 06:46:43.291898165 +0000 UTC m=+116.920250337" Jan 30 06:46:43 crc kubenswrapper[4520]: I0130 06:46:43.292676 4520 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z5rcx"] Jan 30 06:46:43 crc kubenswrapper[4520]: I0130 06:46:43.292758 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:43 crc kubenswrapper[4520]: E0130 06:46:43.292830 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:44 crc kubenswrapper[4520]: I0130 06:46:44.684759 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:44 crc kubenswrapper[4520]: E0130 06:46:44.685034 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:44 crc kubenswrapper[4520]: I0130 06:46:44.684806 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:44 crc kubenswrapper[4520]: I0130 06:46:44.684760 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:44 crc kubenswrapper[4520]: E0130 06:46:44.685099 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:44 crc kubenswrapper[4520]: E0130 06:46:44.685134 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:45 crc kubenswrapper[4520]: I0130 06:46:45.685500 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:45 crc kubenswrapper[4520]: E0130 06:46:45.685610 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:46 crc kubenswrapper[4520]: I0130 06:46:46.685213 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:46 crc kubenswrapper[4520]: I0130 06:46:46.685216 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:46 crc kubenswrapper[4520]: I0130 06:46:46.685263 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:46 crc kubenswrapper[4520]: E0130 06:46:46.685426 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:46 crc kubenswrapper[4520]: E0130 06:46:46.686409 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:46 crc kubenswrapper[4520]: E0130 06:46:46.686482 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:46 crc kubenswrapper[4520]: E0130 06:46:46.723579 4520 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 06:46:46 crc kubenswrapper[4520]: E0130 06:46:46.756315 4520 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 06:46:47 crc kubenswrapper[4520]: I0130 06:46:47.685286 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:47 crc kubenswrapper[4520]: E0130 06:46:47.685495 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:48 crc kubenswrapper[4520]: I0130 06:46:48.685477 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:48 crc kubenswrapper[4520]: I0130 06:46:48.685527 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:48 crc kubenswrapper[4520]: E0130 06:46:48.685612 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:48 crc kubenswrapper[4520]: I0130 06:46:48.685693 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:48 crc kubenswrapper[4520]: E0130 06:46:48.685812 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:48 crc kubenswrapper[4520]: E0130 06:46:48.685937 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:49 crc kubenswrapper[4520]: I0130 06:46:49.684660 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:49 crc kubenswrapper[4520]: E0130 06:46:49.684775 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:50 crc kubenswrapper[4520]: I0130 06:46:50.685074 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:50 crc kubenswrapper[4520]: I0130 06:46:50.685113 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:50 crc kubenswrapper[4520]: I0130 06:46:50.685074 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:50 crc kubenswrapper[4520]: E0130 06:46:50.685186 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:50 crc kubenswrapper[4520]: E0130 06:46:50.685240 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:50 crc kubenswrapper[4520]: E0130 06:46:50.685292 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:51 crc kubenswrapper[4520]: I0130 06:46:51.685428 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:51 crc kubenswrapper[4520]: E0130 06:46:51.685566 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:51 crc kubenswrapper[4520]: E0130 06:46:51.757552 4520 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 06:46:52 crc kubenswrapper[4520]: I0130 06:46:52.685314 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:52 crc kubenswrapper[4520]: I0130 06:46:52.685356 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:52 crc kubenswrapper[4520]: E0130 06:46:52.685445 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:52 crc kubenswrapper[4520]: I0130 06:46:52.685504 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:52 crc kubenswrapper[4520]: E0130 06:46:52.685585 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:52 crc kubenswrapper[4520]: E0130 06:46:52.685632 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:53 crc kubenswrapper[4520]: I0130 06:46:53.684678 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:53 crc kubenswrapper[4520]: E0130 06:46:53.684789 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:54 crc kubenswrapper[4520]: I0130 06:46:54.684619 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:54 crc kubenswrapper[4520]: I0130 06:46:54.684668 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:54 crc kubenswrapper[4520]: I0130 06:46:54.684674 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:54 crc kubenswrapper[4520]: E0130 06:46:54.684767 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:54 crc kubenswrapper[4520]: E0130 06:46:54.684847 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:54 crc kubenswrapper[4520]: E0130 06:46:54.684936 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:55 crc kubenswrapper[4520]: I0130 06:46:55.684690 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:55 crc kubenswrapper[4520]: E0130 06:46:55.684796 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:56 crc kubenswrapper[4520]: I0130 06:46:56.685033 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:56 crc kubenswrapper[4520]: E0130 06:46:56.685888 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:56 crc kubenswrapper[4520]: I0130 06:46:56.685907 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:56 crc kubenswrapper[4520]: I0130 06:46:56.685941 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:56 crc kubenswrapper[4520]: E0130 06:46:56.685960 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:56 crc kubenswrapper[4520]: E0130 06:46:56.686028 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:56 crc kubenswrapper[4520]: I0130 06:46:56.686258 4520 scope.go:117] "RemoveContainer" containerID="d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21" Jan 30 06:46:56 crc kubenswrapper[4520]: E0130 06:46:56.758040 4520 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 06:46:57 crc kubenswrapper[4520]: I0130 06:46:57.150121 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/1.log" Jan 30 06:46:57 crc kubenswrapper[4520]: I0130 06:46:57.150164 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mn7g2" event={"ID":"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26","Type":"ContainerStarted","Data":"62c6675ec316ce30555a257a931998d24e9ffbaca75aed0464d002d9f6c3c7cf"} Jan 30 06:46:57 crc kubenswrapper[4520]: I0130 06:46:57.685401 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:57 crc kubenswrapper[4520]: E0130 06:46:57.685506 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:46:58 crc kubenswrapper[4520]: I0130 06:46:58.685222 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:46:58 crc kubenswrapper[4520]: I0130 06:46:58.685274 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:46:58 crc kubenswrapper[4520]: I0130 06:46:58.685222 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:46:58 crc kubenswrapper[4520]: E0130 06:46:58.685329 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:46:58 crc kubenswrapper[4520]: E0130 06:46:58.685407 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:46:58 crc kubenswrapper[4520]: E0130 06:46:58.685496 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:46:59 crc kubenswrapper[4520]: I0130 06:46:59.685604 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:46:59 crc kubenswrapper[4520]: E0130 06:46:59.685727 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:47:00 crc kubenswrapper[4520]: I0130 06:47:00.685482 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:47:00 crc kubenswrapper[4520]: I0130 06:47:00.685497 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:47:00 crc kubenswrapper[4520]: E0130 06:47:00.685860 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 06:47:00 crc kubenswrapper[4520]: I0130 06:47:00.685580 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:47:00 crc kubenswrapper[4520]: E0130 06:47:00.685931 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 06:47:00 crc kubenswrapper[4520]: E0130 06:47:00.685762 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 06:47:01 crc kubenswrapper[4520]: I0130 06:47:01.685230 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:47:01 crc kubenswrapper[4520]: E0130 06:47:01.685552 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z5rcx" podUID="6e1a8ebe-5163-47dd-a320-a286c92971c2" Jan 30 06:47:02 crc kubenswrapper[4520]: I0130 06:47:02.106709 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" Jan 30 06:47:02 crc kubenswrapper[4520]: I0130 06:47:02.684841 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:47:02 crc kubenswrapper[4520]: I0130 06:47:02.684925 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:47:02 crc kubenswrapper[4520]: I0130 06:47:02.685174 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:47:02 crc kubenswrapper[4520]: I0130 06:47:02.686076 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 06:47:02 crc kubenswrapper[4520]: I0130 06:47:02.686477 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 06:47:02 crc kubenswrapper[4520]: I0130 06:47:02.687817 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 06:47:02 crc kubenswrapper[4520]: I0130 06:47:02.687858 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 06:47:03 crc kubenswrapper[4520]: I0130 06:47:03.685496 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx" Jan 30 06:47:03 crc kubenswrapper[4520]: I0130 06:47:03.687854 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 06:47:03 crc kubenswrapper[4520]: I0130 06:47:03.688421 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.442893 4520 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.463460 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-782cc"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.463786 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.467538 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.467670 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.467765 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.467883 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.467985 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.468068 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.468171 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.468262 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.468348 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.469810 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.470438 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.470670 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.472925 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.473056 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.473279 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.473483 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.473806 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.474371 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.474719 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.476704 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jk9c"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.478732 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w7xl2"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.479682 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.480331 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.482549 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.483723 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dqjws"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.484265 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.487024 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ll7nf"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.487453 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.487960 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.488619 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.489397 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.489935 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.489963 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-lflpb"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.490264 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-lflpb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.542716 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.547683 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.548251 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.548811 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.549266 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.550211 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.552696 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.552869 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.553013 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.553138 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.553333 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.554231 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.554572 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.554702 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.554858 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.554974 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.555208 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.555888 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 06:47:10 crc kubenswrapper[4520]: 
I0130 06:47:10.556012 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.556022 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.556138 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.556224 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.556233 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s6bks"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.556321 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.556648 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.556664 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.557176 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.557195 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.557309 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.557313 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.557403 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.558963 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.558992 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559076 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559278 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559290 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559450 4520 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559545 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559579 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559668 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559695 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559881 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.559950 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.560268 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.560687 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.561202 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.563948 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.570130 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.570424 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.577973 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.577976 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580327 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-config\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580361 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580381 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580402 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580418 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580433 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjzk2\" (UniqueName: \"kubernetes.io/projected/f97d3be8-69cc-4005-aa61-9ff3f6c72287-kube-api-access-kjzk2\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580447 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-client-ca\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: 
\"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580459 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63d73a7-c813-4983-bccf-805604f7d593-serving-cert\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580479 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8ab10e4-5a02-445b-8788-1ed64c22c9e3-metrics-tls\") pod \"dns-operator-744455d44c-ll7nf\" (UID: \"b8ab10e4-5a02-445b-8788-1ed64c22c9e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580494 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-client-ca\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580509 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580540 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580556 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-dir\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580570 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580586 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580602 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580615 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580632 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-service-ca-bundle\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580648 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b08d0a-4aa5-43be-a498-55e54d6e8c31-serving-cert\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580672 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd235a24-175b-4983-980e-2630b3c5b39f-serving-cert\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580687 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580700 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22d49062-540d-414e-b0c6-2c20d411fa71-serving-cert\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580717 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-65vrn\" (UniqueName: \"kubernetes.io/projected/f56326ab-bf4f-43c5-8762-85cb71c93f0a-kube-api-access-65vrn\") pod \"downloads-7954f5f757-lflpb\" (UID: \"f56326ab-bf4f-43c5-8762-85cb71c93f0a\") " pod="openshift-console/downloads-7954f5f757-lflpb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580738 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcbpf\" (UniqueName: \"kubernetes.io/projected/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-kube-api-access-dcbpf\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580751 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwpmd\" (UniqueName: \"kubernetes.io/projected/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-kube-api-access-nwpmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580781 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgvmp\" (UniqueName: \"kubernetes.io/projected/265d9231-d5db-4cdb-80b8-dfd95dffa386-kube-api-access-bgvmp\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580804 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580818 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7pt2\" (UniqueName: \"kubernetes.io/projected/22d49062-540d-414e-b0c6-2c20d411fa71-kube-api-access-j7pt2\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580833 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97d3be8-69cc-4005-aa61-9ff3f6c72287-config\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580847 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 
06:47:10.580860 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23b08d0a-4aa5-43be-a498-55e54d6e8c31-trusted-ca\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580875 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b08d0a-4aa5-43be-a498-55e54d6e8c31-config\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580890 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f97d3be8-69cc-4005-aa61-9ff3f6c72287-auth-proxy-config\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580904 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksvln\" (UniqueName: \"kubernetes.io/projected/23b08d0a-4aa5-43be-a498-55e54d6e8c31-kube-api-access-ksvln\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580934 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6nc4\" (UniqueName: \"kubernetes.io/projected/d63d73a7-c813-4983-bccf-805604f7d593-kube-api-access-d6nc4\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580950 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-config\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580964 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-config\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580982 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs4tc\" (UniqueName: \"kubernetes.io/projected/dd235a24-175b-4983-980e-2630b3c5b39f-kube-api-access-cs4tc\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.580997 4520 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.581013 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zgw\" (UniqueName: \"kubernetes.io/projected/b8ab10e4-5a02-445b-8788-1ed64c22c9e3-kube-api-access-76zgw\") pod \"dns-operator-744455d44c-ll7nf\" (UID: \"b8ab10e4-5a02-445b-8788-1ed64c22c9e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.581026 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-serving-cert\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.581039 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.581061 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-policies\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.581077 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f97d3be8-69cc-4005-aa61-9ff3f6c72287-machine-approver-tls\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.581090 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.585764 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.587194 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w62kb"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.587568 4520 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.589609 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.593525 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.593851 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.594887 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.595042 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.595201 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.595301 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.595417 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.595553 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.595669 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.597161 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-nkbdc"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.601278 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.604905 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.605459 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.605508 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.605746 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.605907 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.606158 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.606348 
4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.607181 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.607639 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.607815 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.607954 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.608113 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.608240 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.608478 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.610552 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.610565 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.611930 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hzv4j"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.612617 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.612676 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.613333 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.621466 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-54cnn"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.621842 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.621857 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.621950 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.622031 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.622085 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.622133 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.622193 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.622234 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.622309 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.622460 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.622987 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.623293 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.631532 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.632010 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.632124 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.632303 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.632530 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.632603 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.632960 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633028 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633114 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633212 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633216 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633295 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633263 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633452 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633681 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.633986 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.635141 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.635302 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.635784 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.636175 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.637289 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.638496 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.638997 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.639824 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.643196 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.643730 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.644169 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-85d5l"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.644977 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.647215 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.651591 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.654591 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.655216 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.655237 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.655645 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.656071 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.659315 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.662728 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fd76j"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.663257 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.663369 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.664725 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.666417 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.671165 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4m8ns"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.671908 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.678895 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.679336 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.679773 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-z67kf"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.680144 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.680149 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-782cc"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.680383 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.680587 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681017 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681592 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6nc4\" (UniqueName: \"kubernetes.io/projected/d63d73a7-c813-4983-bccf-805604f7d593-kube-api-access-d6nc4\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681620 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-image-import-ca\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681643 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2dg\" (UniqueName: \"kubernetes.io/projected/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-kube-api-access-vj2dg\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681662 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-oauth-config\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681675 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681693 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-config\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681710 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-config\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681725 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs4tc\" (UniqueName: \"kubernetes.io/projected/dd235a24-175b-4983-980e-2630b3c5b39f-kube-api-access-cs4tc\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681740 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681756 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-encryption-config\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681771 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-serving-cert\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681789 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zgw\" (UniqueName: \"kubernetes.io/projected/b8ab10e4-5a02-445b-8788-1ed64c22c9e3-kube-api-access-76zgw\") pod \"dns-operator-744455d44c-ll7nf\" (UID: \"b8ab10e4-5a02-445b-8788-1ed64c22c9e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681852 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-serving-cert\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681868 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681885 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681899 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwpp\" (UniqueName: \"kubernetes.io/projected/63191221-7520-4517-aeed-6d3896c2cad1-kube-api-access-llwpp\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681914 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-oauth-serving-cert\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681937 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-policies\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681950 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/63191221-7520-4517-aeed-6d3896c2cad1-audit-dir\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681965 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f97d3be8-69cc-4005-aa61-9ff3f6c72287-machine-approver-tls\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681980 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.681996 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf96x\" (UniqueName: \"kubernetes.io/projected/2b9d0f20-53d1-4142-b961-55d553553aed-kube-api-access-kf96x\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682010 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682023 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-config\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682037 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-serving-cert\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682054 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-config\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682069 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682083 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682099 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682113 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682132 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8fg\" (UniqueName: \"kubernetes.io/projected/b0dc81d4-052e-46df-a17e-4461ccf8a64d-kube-api-access-9p8fg\") pod \"cluster-samples-operator-665b6dd947-sks8c\" (UID: \"b0dc81d4-052e-46df-a17e-4461ccf8a64d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682147 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwtz\" (UniqueName: \"kubernetes.io/projected/a7374ef9-1396-4293-b711-fb07eaa512d0-kube-api-access-5mwtz\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682160 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682378 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjzk2\" (UniqueName: \"kubernetes.io/projected/f97d3be8-69cc-4005-aa61-9ff3f6c72287-kube-api-access-kjzk2\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682392 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-client-ca\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682411 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7374ef9-1396-4293-b711-fb07eaa512d0-serving-cert\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682427 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8ab10e4-5a02-445b-8788-1ed64c22c9e3-metrics-tls\") pod \"dns-operator-744455d44c-ll7nf\" (UID: \"b8ab10e4-5a02-445b-8788-1ed64c22c9e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682443 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63d73a7-c813-4983-bccf-805604f7d593-serving-cert\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682459 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-ca\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682473 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-audit-policies\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682489 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwqj\" (UniqueName: \"kubernetes.io/projected/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-kube-api-access-5hwqj\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682502 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-encryption-config\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682537 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682552 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-config\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682569 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-client-ca\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682586 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682601 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-dir\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682617 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682637 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682653 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682668 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682683 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-etcd-client\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682699 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b08d0a-4aa5-43be-a498-55e54d6e8c31-serving-cert\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682713 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-config\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682727 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vddvg\" (UniqueName: \"kubernetes.io/projected/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-kube-api-access-vddvg\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682742 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-service-ca-bundle\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682757 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd235a24-175b-4983-980e-2630b3c5b39f-serving-cert\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682772 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-trusted-ca-bundle\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682808 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682823 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22d49062-540d-414e-b0c6-2c20d411fa71-serving-cert\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682838 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0dc81d4-052e-46df-a17e-4461ccf8a64d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-sks8c\" (UID: \"b0dc81d4-052e-46df-a17e-4461ccf8a64d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682852 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682880 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-serving-cert\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682894 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0267c2e-5b07-4578-bc73-2504b5300313-node-pullsecrets\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682909 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-audit\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682926 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-etcd-serving-ca\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682942 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65vrn\" (UniqueName: \"kubernetes.io/projected/f56326ab-bf4f-43c5-8762-85cb71c93f0a-kube-api-access-65vrn\") pod \"downloads-7954f5f757-lflpb\" (UID: \"f56326ab-bf4f-43c5-8762-85cb71c93f0a\") " pod="openshift-console/downloads-7954f5f757-lflpb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682957 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682981 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcbpf\" (UniqueName: \"kubernetes.io/projected/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-kube-api-access-dcbpf\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.682997 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwpmd\" (UniqueName: \"kubernetes.io/projected/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-kube-api-access-nwpmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683017 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683033 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683047 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw6dm\" (UniqueName: \"kubernetes.io/projected/d0267c2e-5b07-4578-bc73-2504b5300313-kube-api-access-hw6dm\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683064 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgvmp\" (UniqueName: \"kubernetes.io/projected/265d9231-d5db-4cdb-80b8-dfd95dffa386-kube-api-access-bgvmp\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683081 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2b9d0f20-53d1-4142-b961-55d553553aed-images\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683094 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-etcd-client\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683107 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-service-ca\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683175 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683203 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7pt2\" (UniqueName: \"kubernetes.io/projected/22d49062-540d-414e-b0c6-2c20d411fa71-kube-api-access-j7pt2\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683463 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0267c2e-5b07-4578-bc73-2504b5300313-audit-dir\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683485 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97d3be8-69cc-4005-aa61-9ff3f6c72287-config\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683500 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683525 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23b08d0a-4aa5-43be-a498-55e54d6e8c31-trusted-ca\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683556 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jk9c"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683578 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d0f20-53d1-4142-b961-55d553553aed-config\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683597 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b08d0a-4aa5-43be-a498-55e54d6e8c31-config\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683656 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-service-ca\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683672 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksvln\" (UniqueName: \"kubernetes.io/projected/23b08d0a-4aa5-43be-a498-55e54d6e8c31-kube-api-access-ksvln\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683687 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-client\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683701 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b9d0f20-53d1-4142-b961-55d553553aed-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683718 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f97d3be8-69cc-4005-aa61-9ff3f6c72287-auth-proxy-config\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.684362 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-client-ca\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.684322 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f97d3be8-69cc-4005-aa61-9ff3f6c72287-auth-proxy-config\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.684490 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-dir\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.685387 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.685482 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-config\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.685947 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-config\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.687683 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.688078 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97d3be8-69cc-4005-aa61-9ff3f6c72287-config\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.689127 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b08d0a-4aa5-43be-a498-55e54d6e8c31-config\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.690088 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23b08d0a-4aa5-43be-a498-55e54d6e8c31-trusted-ca\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.683580 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lflpb"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.690146 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.690158 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ll7nf"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.690926 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.691863 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-config\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.692895 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-serving-cert\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.693118 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.693549 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-policies\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.695452 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.695982 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.696821 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.696846 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f97d3be8-69cc-4005-aa61-9ff3f6c72287-machine-approver-tls\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.697100 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.700353 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.700770 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.701091 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.701487 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22d49062-540d-414e-b0c6-2c20d411fa71-service-ca-bundle\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.702169 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-client-ca\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.702970 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.703260 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8ab10e4-5a02-445b-8788-1ed64c22c9e3-metrics-tls\") pod \"dns-operator-744455d44c-ll7nf\" (UID: \"b8ab10e4-5a02-445b-8788-1ed64c22c9e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.704730 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.705109 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63d73a7-c813-4983-bccf-805604f7d593-serving-cert\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.709963 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22d49062-540d-414e-b0c6-2c20d411fa71-serving-cert\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.710357 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.715898 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.717650 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.717775 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dqjws"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.717858 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bd6fq"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.718563 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4526b"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.718669 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bd6fq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.719177 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.719297 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.719348 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.719387 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"]
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.719574 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.719811 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.720729 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w7xl2"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.720822 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.720899 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nkbdc"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.720975 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.721147 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s6bks"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.721267 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.720232 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b08d0a-4aa5-43be-a498-55e54d6e8c31-serving-cert\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.720190 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd235a24-175b-4983-980e-2630b3c5b39f-serving-cert\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.725023 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.726678 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.727121 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.728414 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.729397 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.731070 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-54cnn"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.732130 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.733292 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-x24fr"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.734135 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x24fr" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.742083 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hzv4j"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.743143 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w62kb"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.744061 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.744971 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.745402 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.746496 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.747589 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bd6fq"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.748496 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.749339 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.750296 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.751117 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4m8ns"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.752020 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.752929 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.753809 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fd76j"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.754649 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x24fr"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.755467 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-85d5l"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.756262 4520 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-cr54l"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.757388 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cr54l"] Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.757452 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.764995 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784508 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj2dg\" (UniqueName: \"kubernetes.io/projected/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-kube-api-access-vj2dg\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784557 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-oauth-config\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784579 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f6039d5-8443-430a-9f72-26ffc3e3310c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784598 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5wm\" (UniqueName: \"kubernetes.io/projected/86dea262-c989-43a8-ae6e-e744012a5e07-kube-api-access-st5wm\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784622 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-encryption-config\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784640 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-serving-cert\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784656 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-metrics-certs\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " 
pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784677 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-certs\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784692 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwpp\" (UniqueName: \"kubernetes.io/projected/63191221-7520-4517-aeed-6d3896c2cad1-kube-api-access-llwpp\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784710 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-oauth-serving-cert\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784725 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/350b6a45-2c99-453a-9e85-e97a1adc863d-secret-volume\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784786 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/63191221-7520-4517-aeed-6d3896c2cad1-audit-dir\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784863 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjpk\" (UniqueName: \"kubernetes.io/projected/dbaed70c-7770-412b-b469-4e5bedbb7df7-kube-api-access-rbjpk\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgkcs\" (UID: \"dbaed70c-7770-412b-b469-4e5bedbb7df7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784897 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755be20c-e623-49b4-8c1b-97f651a664f7-config\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: \"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784944 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmfsg\" (UniqueName: \"kubernetes.io/projected/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-kube-api-access-bmfsg\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784961 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-srv-cert\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784976 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-stats-auth\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.784996 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf96x\" (UniqueName: \"kubernetes.io/projected/2b9d0f20-53d1-4142-b961-55d553553aed-kube-api-access-kf96x\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785050 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/755be20c-e623-49b4-8c1b-97f651a664f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: \"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785078 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b49a935-c5ef-4290-a394-ff47774b9172-images\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785098 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785121 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7374ef9-1396-4293-b711-fb07eaa512d0-serving-cert\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785137 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-encryption-config\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785153 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-audit-policies\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785168 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hwqj\" (UniqueName: \"kubernetes.io/projected/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-kube-api-access-5hwqj\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785184 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785203 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-config\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785242 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86dea262-c989-43a8-ae6e-e744012a5e07-tmpfs\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785260 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f78b20-5b64-4fb1-8b47-9053654b33a5-serving-cert\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785278 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-etcd-client\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785294 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-node-bootstrap-token\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785312 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-vddvg\" (UniqueName: \"kubernetes.io/projected/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-kube-api-access-vddvg\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785328 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54l84\" (UniqueName: \"kubernetes.io/projected/ba04cf12-8677-4024-9c2c-618dfc096d4d-kube-api-access-54l84\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785344 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2r6p\" (UniqueName: \"kubernetes.io/projected/7b49a935-c5ef-4290-a394-ff47774b9172-kube-api-access-x2r6p\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785371 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-trusted-ca-bundle\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785389 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f6039d5-8443-430a-9f72-26ffc3e3310c-config\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785407 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphsk\" (UniqueName: \"kubernetes.io/projected/e209fbc5-b75f-4fe7-829b-351ce502929e-kube-api-access-kphsk\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785422 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba04cf12-8677-4024-9c2c-618dfc096d4d-srv-cert\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785439 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7229bd1-5891-4654-ad14-c0efed77e9b7-service-ca-bundle\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785460 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b0dc81d4-052e-46df-a17e-4461ccf8a64d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-sks8c\" (UID: \"b0dc81d4-052e-46df-a17e-4461ccf8a64d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785528 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785559 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-webhook-cert\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785637 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785657 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-service-ca\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.785819 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2b9d0f20-53d1-4142-b961-55d553553aed-images\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786034 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0267c2e-5b07-4578-bc73-2504b5300313-audit-dir\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786056 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2f78b20-5b64-4fb1-8b47-9053654b33a5-config\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786066 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/63191221-7520-4517-aeed-6d3896c2cad1-audit-dir\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786075 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-service-ca\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786152 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-client\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786182 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwrq\" (UniqueName: \"kubernetes.io/projected/a2f78b20-5b64-4fb1-8b47-9053654b33a5-kube-api-access-wxwrq\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786202 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-image-import-ca\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786223 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786241 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786262 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786278 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-metrics-tls\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786293 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03f68811-ba27-419e-afa9-1640c681b1fc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786311 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaed70c-7770-412b-b469-4e5bedbb7df7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgkcs\" (UID: \"dbaed70c-7770-412b-b469-4e5bedbb7df7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786351 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-key\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786368 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4267d7ff-3907-40fe-ac79-e30e74e13476-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786386 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jnnc\" (UniqueName: \"kubernetes.io/projected/c033428a-1e35-46a7-a589-d2374d629f46-kube-api-access-4jnnc\") pod \"multus-admission-controller-857f4d67dd-85d5l\" (UID: \"c033428a-1e35-46a7-a589-d2374d629f46\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786401 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn29t\" (UniqueName: \"kubernetes.io/projected/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-kube-api-access-jn29t\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786416 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4267d7ff-3907-40fe-ac79-e30e74e13476-trusted-ca\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786442 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7s4\" (UniqueName: \"kubernetes.io/projected/b3470d5b-3e9f-4d41-a992-77b47e35ac52-kube-api-access-bv7s4\") pod \"migrator-59844c95c7-8pt4x\" (UID: \"b3470d5b-3e9f-4d41-a992-77b47e35ac52\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786457 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-apiservice-cert\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786475 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786491 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-config\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786507 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-serving-cert\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786546 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f68811-ba27-419e-afa9-1640c681b1fc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786564 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8fg\" (UniqueName: \"kubernetes.io/projected/b0dc81d4-052e-46df-a17e-4461ccf8a64d-kube-api-access-9p8fg\") pod \"cluster-samples-operator-665b6dd947-sks8c\" (UID: \"b0dc81d4-052e-46df-a17e-4461ccf8a64d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786581 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwtz\" (UniqueName: \"kubernetes.io/projected/a7374ef9-1396-4293-b711-fb07eaa512d0-kube-api-access-5mwtz\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786598 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786613 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755be20c-e623-49b4-8c1b-97f651a664f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: 
\"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786630 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786646 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786662 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-ca\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786680 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vf2f\" (UniqueName: \"kubernetes.io/projected/82561e0e-8f14-4e88-adbb-b0a2b3d8760c-kube-api-access-5vf2f\") pod \"package-server-manager-789f6589d5-nc9qp\" (UID: \"82561e0e-8f14-4e88-adbb-b0a2b3d8760c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786696 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b49a935-c5ef-4290-a394-ff47774b9172-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786716 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c033428a-1e35-46a7-a589-d2374d629f46-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-85d5l\" (UID: \"c033428a-1e35-46a7-a589-d2374d629f46\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786731 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f6039d5-8443-430a-9f72-26ffc3e3310c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786723 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-oauth-serving-cert\") pod 
\"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786748 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba04cf12-8677-4024-9c2c-618dfc096d4d-profile-collector-cert\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786839 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8766j\" (UniqueName: \"kubernetes.io/projected/4267d7ff-3907-40fe-ac79-e30e74e13476-kube-api-access-8766j\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786873 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-config\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786875 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-service-ca\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786917 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsqcj\" (UniqueName: \"kubernetes.io/projected/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-kube-api-access-jsqcj\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786940 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786962 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-serving-cert\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.786999 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0267c2e-5b07-4578-bc73-2504b5300313-node-pullsecrets\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 
06:47:10.787018 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4267d7ff-3907-40fe-ac79-e30e74e13476-metrics-tls\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787035 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8htt\" (UniqueName: \"kubernetes.io/projected/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-kube-api-access-p8htt\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787077 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787093 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-audit\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787114 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-etcd-serving-ca\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787159 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccqqt\" (UniqueName: \"kubernetes.io/projected/7d200a37-0276-4e2c-b7ef-98107be3f313-kube-api-access-ccqqt\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787176 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-default-certificate\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787197 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787337 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw6dm\" (UniqueName: \"kubernetes.io/projected/d0267c2e-5b07-4578-bc73-2504b5300313-kube-api-access-hw6dm\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787359 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-etcd-client\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787546 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-config-volume\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787574 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-cabundle\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787596 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d0f20-53d1-4142-b961-55d553553aed-config\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787718 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/82561e0e-8f14-4e88-adbb-b0a2b3d8760c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nc9qp\" (UID: \"82561e0e-8f14-4e88-adbb-b0a2b3d8760c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787739 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b49a935-c5ef-4290-a394-ff47774b9172-proxy-tls\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787765 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b9d0f20-53d1-4142-b961-55d553553aed-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787890 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnhr\" (UniqueName: \"kubernetes.io/projected/350b6a45-2c99-453a-9e85-e97a1adc863d-kube-api-access-xgnhr\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.787908 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmjdp\" (UniqueName: \"kubernetes.io/projected/a7229bd1-5891-4654-ad14-c0efed77e9b7-kube-api-access-xmjdp\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.788050 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f68811-ba27-419e-afa9-1640c681b1fc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.788344 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-encryption-config\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.789255 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-config\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.789678 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.789744 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0267c2e-5b07-4578-bc73-2504b5300313-node-pullsecrets\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.790073 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-audit\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.790368 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-etcd-serving-ca\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.790586 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-image-import-ca\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.790706 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-config\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.790721 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.790914 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7374ef9-1396-4293-b711-fb07eaa512d0-serving-cert\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.791600 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-ca\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.791847 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-audit-policies\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.792124 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a7374ef9-1396-4293-b711-fb07eaa512d0-etcd-client\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.792530 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-serving-cert\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.792604 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.792715 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9d0f20-53d1-4142-b961-55d553553aed-config\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.793154 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.793637 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2b9d0f20-53d1-4142-b961-55d553553aed-images\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.793674 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0267c2e-5b07-4578-bc73-2504b5300313-audit-dir\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.793857 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0dc81d4-052e-46df-a17e-4461ccf8a64d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-sks8c\" (UID: \"b0dc81d4-052e-46df-a17e-4461ccf8a64d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.794178 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-trusted-ca-bundle\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.794385 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-service-ca\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.794922 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63191221-7520-4517-aeed-6d3896c2cad1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.795162 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.795923 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-etcd-client\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.796067 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b9d0f20-53d1-4142-b961-55d553553aed-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.796088 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-oauth-config\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.796205 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-encryption-config\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.796222 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.798314 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63191221-7520-4517-aeed-6d3896c2cad1-serving-cert\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.800726 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.804914 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.813708 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-etcd-client\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.845058 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.852136 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0267c2e-5b07-4578-bc73-2504b5300313-serving-cert\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.865099 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.885431 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889619 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889650 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4267d7ff-3907-40fe-ac79-e30e74e13476-metrics-tls\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889670 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8htt\" (UniqueName: \"kubernetes.io/projected/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-kube-api-access-p8htt\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889698 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccqqt\" (UniqueName: \"kubernetes.io/projected/7d200a37-0276-4e2c-b7ef-98107be3f313-kube-api-access-ccqqt\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889713 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-default-certificate\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889733 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-config-volume\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889751 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-cabundle\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889768 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/82561e0e-8f14-4e88-adbb-b0a2b3d8760c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nc9qp\" (UID: \"82561e0e-8f14-4e88-adbb-b0a2b3d8760c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889785 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b49a935-c5ef-4290-a394-ff47774b9172-proxy-tls\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889815 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgnhr\" (UniqueName: \"kubernetes.io/projected/350b6a45-2c99-453a-9e85-e97a1adc863d-kube-api-access-xgnhr\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889830 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmjdp\" (UniqueName: \"kubernetes.io/projected/a7229bd1-5891-4654-ad14-c0efed77e9b7-kube-api-access-xmjdp\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889849 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f68811-ba27-419e-afa9-1640c681b1fc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889868 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f6039d5-8443-430a-9f72-26ffc3e3310c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889882 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5wm\" (UniqueName: \"kubernetes.io/projected/86dea262-c989-43a8-ae6e-e744012a5e07-kube-api-access-st5wm\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889903 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-metrics-certs\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889923 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-certs\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889945 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/350b6a45-2c99-453a-9e85-e97a1adc863d-secret-volume\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889960 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjpk\" (UniqueName: \"kubernetes.io/projected/dbaed70c-7770-412b-b469-4e5bedbb7df7-kube-api-access-rbjpk\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgkcs\" (UID: \"dbaed70c-7770-412b-b469-4e5bedbb7df7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889974 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755be20c-e623-49b4-8c1b-97f651a664f7-config\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: \"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.889991 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmfsg\" (UniqueName: \"kubernetes.io/projected/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-kube-api-access-bmfsg\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890005 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-srv-cert\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890019 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-stats-auth\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890040 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/755be20c-e623-49b4-8c1b-97f651a664f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: \"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890054 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b49a935-c5ef-4290-a394-ff47774b9172-images\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890069 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890094 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890111 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86dea262-c989-43a8-ae6e-e744012a5e07-tmpfs\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890126 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f78b20-5b64-4fb1-8b47-9053654b33a5-serving-cert\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890143 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-node-bootstrap-token\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890163 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54l84\" (UniqueName: \"kubernetes.io/projected/ba04cf12-8677-4024-9c2c-618dfc096d4d-kube-api-access-54l84\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890179 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2r6p\" (UniqueName: \"kubernetes.io/projected/7b49a935-c5ef-4290-a394-ff47774b9172-kube-api-access-x2r6p\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890206 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f6039d5-8443-430a-9f72-26ffc3e3310c-config\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890221 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphsk\" (UniqueName: \"kubernetes.io/projected/e209fbc5-b75f-4fe7-829b-351ce502929e-kube-api-access-kphsk\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890236 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba04cf12-8677-4024-9c2c-618dfc096d4d-srv-cert\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890253 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7229bd1-5891-4654-ad14-c0efed77e9b7-service-ca-bundle\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890274 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-webhook-cert\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890325 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2f78b20-5b64-4fb1-8b47-9053654b33a5-config\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890341 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwrq\" (UniqueName: \"kubernetes.io/projected/a2f78b20-5b64-4fb1-8b47-9053654b33a5-kube-api-access-wxwrq\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890359 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890376 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-metrics-tls\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890393 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03f68811-ba27-419e-afa9-1640c681b1fc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890410 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaed70c-7770-412b-b469-4e5bedbb7df7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgkcs\" (UID: \"dbaed70c-7770-412b-b469-4e5bedbb7df7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890432 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-key\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890447 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4267d7ff-3907-40fe-ac79-e30e74e13476-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890464 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jnnc\" (UniqueName: \"kubernetes.io/projected/c033428a-1e35-46a7-a589-d2374d629f46-kube-api-access-4jnnc\") pod \"multus-admission-controller-857f4d67dd-85d5l\" (UID: \"c033428a-1e35-46a7-a589-d2374d629f46\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890479 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn29t\" (UniqueName: \"kubernetes.io/projected/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-kube-api-access-jn29t\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890493 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4267d7ff-3907-40fe-ac79-e30e74e13476-trusted-ca\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890508 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv7s4\" (UniqueName: \"kubernetes.io/projected/b3470d5b-3e9f-4d41-a992-77b47e35ac52-kube-api-access-bv7s4\") pod \"migrator-59844c95c7-8pt4x\" (UID: \"b3470d5b-3e9f-4d41-a992-77b47e35ac52\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890539 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-apiservice-cert\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890564 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f68811-ba27-419e-afa9-1640c681b1fc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890591 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755be20c-e623-49b4-8c1b-97f651a664f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: \"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890597 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86dea262-c989-43a8-ae6e-e744012a5e07-tmpfs\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890606 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890623 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890640 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vf2f\" (UniqueName: \"kubernetes.io/projected/82561e0e-8f14-4e88-adbb-b0a2b3d8760c-kube-api-access-5vf2f\") pod \"package-server-manager-789f6589d5-nc9qp\" (UID: \"82561e0e-8f14-4e88-adbb-b0a2b3d8760c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890656 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b49a935-c5ef-4290-a394-ff47774b9172-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890672 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c033428a-1e35-46a7-a589-d2374d629f46-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-85d5l\" (UID: \"c033428a-1e35-46a7-a589-d2374d629f46\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890688 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f6039d5-8443-430a-9f72-26ffc3e3310c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890703 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba04cf12-8677-4024-9c2c-618dfc096d4d-profile-collector-cert\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890718 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8766j\" (UniqueName: \"kubernetes.io/projected/4267d7ff-3907-40fe-ac79-e30e74e13476-kube-api-access-8766j\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.890737 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsqcj\" (UniqueName: \"kubernetes.io/projected/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-kube-api-access-jsqcj\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.891220 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b49a935-c5ef-4290-a394-ff47774b9172-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.905677 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.925287 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.945431 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.964816 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.974144 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755be20c-e623-49b4-8c1b-97f651a664f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: \"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.985161 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 06:47:10 crc kubenswrapper[4520]: I0130 06:47:10.991589 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755be20c-e623-49b4-8c1b-97f651a664f7-config\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: \"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.004915 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.011536 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0267c2e-5b07-4578-bc73-2504b5300313-config\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.025511 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.030901 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b49a935-c5ef-4290-a394-ff47774b9172-images\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.045145 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.065083 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.071904 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7b49a935-c5ef-4290-a394-ff47774b9172-proxy-tls\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.085473 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.104939 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.125230 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.145589 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.152833 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.165007 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.171294 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.184966 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.204838 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.225343 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.233668 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f6039d5-8443-430a-9f72-26ffc3e3310c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.245893 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.251497 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f6039d5-8443-430a-9f72-26ffc3e3310c-config\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.266073 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.285965 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.305857 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.325264 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.333702 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f68811-ba27-419e-afa9-1640c681b1fc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.345935 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.365701 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.371389 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f68811-ba27-419e-afa9-1640c681b1fc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.385659 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.405310 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.425125 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.432124 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4267d7ff-3907-40fe-ac79-e30e74e13476-metrics-tls\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.445459 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.471324 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.481611 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4267d7ff-3907-40fe-ac79-e30e74e13476-trusted-ca\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.486093 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.491464 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2f78b20-5b64-4fb1-8b47-9053654b33a5-config\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.505916 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.525416 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.533355 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f78b20-5b64-4fb1-8b47-9053654b33a5-serving-cert\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.545586 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.564881 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.585296 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.605711 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.625125 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.645155 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.664851 4520 request.go:700] Waited for 1.019678452s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.665687 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.672701 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c033428a-1e35-46a7-a589-d2374d629f46-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-85d5l\" (UID: \"c033428a-1e35-46a7-a589-d2374d629f46\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.684894 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.705603 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.725614 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.732607 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba04cf12-8677-4024-9c2c-618dfc096d4d-srv-cert\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.745074 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.753148 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/350b6a45-2c99-453a-9e85-e97a1adc863d-secret-volume\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.753607 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.753623 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba04cf12-8677-4024-9c2c-618dfc096d4d-profile-collector-cert\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.765177 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.785901 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.792740 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/82561e0e-8f14-4e88-adbb-b0a2b3d8760c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nc9qp\" (UID: \"82561e0e-8f14-4e88-adbb-b0a2b3d8760c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.806049 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.825817 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.834264 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaed70c-7770-412b-b469-4e5bedbb7df7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgkcs\" (UID: \"dbaed70c-7770-412b-b469-4e5bedbb7df7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.845979 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.865345 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.872225 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.885194 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.889974 4520 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890065 4520 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.889997 4520 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890041 4520 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890197 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-default-certificate podName:a7229bd1-5891-4654-ad14-c0efed77e9b7 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390074329 +0000 UTC m=+146.018426500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-default-certificate") pod "router-default-5444994796-z67kf" (UID: "a7229bd1-5891-4654-ad14-c0efed77e9b7") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890267 4520 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890287 4520 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890307 4520 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890331 4520 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890354 4520 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890273 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-metrics-certs podName:a7229bd1-5891-4654-ad14-c0efed77e9b7 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390264637 +0000 UTC m=+146.018616819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-metrics-certs") pod "router-default-5444994796-z67kf" (UID: "a7229bd1-5891-4654-ad14-c0efed77e9b7") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890480 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-config-volume podName:5c4cb732-fc3d-4607-8051-d1ac81d4b9ad nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390471076 +0000 UTC m=+146.018823257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-config-volume") pod "dns-default-bd6fq" (UID: "5c4cb732-fc3d-4607-8051-d1ac81d4b9ad") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890556 4520 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890569 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-cabundle podName:0c769d70-6c64-4e67-ad6a-cb99f70c31c0 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390561056 +0000 UTC m=+146.018913237 (durationBeforeRetry 500ms).
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890651 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-srv-cert podName:5dfff538-11e7-4c6b-9db0-c26e2f6b6140 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390634023 +0000 UTC m=+146.018986214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-srv-cert") pod "olm-operator-6b444d44fb-bjb69" (UID: "5dfff538-11e7-4c6b-9db0-c26e2f6b6140") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890499 4520 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890706 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7229bd1-5891-4654-ad14-c0efed77e9b7-service-ca-bundle podName:a7229bd1-5891-4654-ad14-c0efed77e9b7 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390696891 +0000 UTC m=+146.019049082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/a7229bd1-5891-4654-ad14-c0efed77e9b7-service-ca-bundle") pod "router-default-5444994796-z67kf" (UID: "a7229bd1-5891-4654-ad14-c0efed77e9b7") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890736 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-certs podName:e209fbc5-b75f-4fe7-829b-351ce502929e nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390728321 +0000 UTC m=+146.019080512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-certs") pod "machine-config-server-4526b" (UID: "e209fbc5-b75f-4fe7-829b-351ce502929e") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890750 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-stats-auth podName:a7229bd1-5891-4654-ad14-c0efed77e9b7 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390743639 +0000 UTC m=+146.019095830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-stats-auth") pod "router-default-5444994796-z67kf" (UID: "a7229bd1-5891-4654-ad14-c0efed77e9b7") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890764 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume podName:350b6a45-2c99-453a-9e85-e97a1adc863d nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390757796 +0000 UTC m=+146.019109987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume") pod "collect-profiles-29495925-q62ms" (UID: "350b6a45-2c99-453a-9e85-e97a1adc863d") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890799 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-node-bootstrap-token podName:e209fbc5-b75f-4fe7-829b-351ce502929e nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390792661 +0000 UTC m=+146.019144852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-node-bootstrap-token") pod "machine-config-server-4526b" (UID: "e209fbc5-b75f-4fe7-829b-351ce502929e") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890827 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-webhook-cert podName:86dea262-c989-43a8-ae6e-e744012a5e07 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390819663 +0000 UTC m=+146.019171854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-webhook-cert") pod "packageserver-d55dfcdfc-kcrth" (UID: "86dea262-c989-43a8-ae6e-e744012a5e07") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890879 4520 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890908 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-metrics-tls podName:5c4cb732-fc3d-4607-8051-d1ac81d4b9ad nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390899191 +0000 UTC m=+146.019251383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-metrics-tls") pod "dns-default-bd6fq" (UID: "5c4cb732-fc3d-4607-8051-d1ac81d4b9ad") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890925 4520 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890946 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-key podName:0c769d70-6c64-4e67-ad6a-cb99f70c31c0 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390940058 +0000 UTC m=+146.019292249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-key") pod "service-ca-9c57cc56f-4m8ns" (UID: "0c769d70-6c64-4e67-ad6a-cb99f70c31c0") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890972 4520 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.890995 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca podName:7d200a37-0276-4e2c-b7ef-98107be3f313 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.390989682 +0000 UTC m=+146.019341873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca") pod "marketplace-operator-79b997595-fd76j" (UID: "7d200a37-0276-4e2c-b7ef-98107be3f313") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.891083 4520 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: E0130 06:47:11.891163 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-apiservice-cert podName:86dea262-c989-43a8-ae6e-e744012a5e07 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:12.391156727 +0000 UTC m=+146.019508907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-apiservice-cert") pod "packageserver-d55dfcdfc-kcrth" (UID: "86dea262-c989-43a8-ae6e-e744012a5e07") : failed to sync secret cache: timed out waiting for the condition
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.904742 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.925314 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.948991 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.965076 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 06:47:11 crc kubenswrapper[4520]: I0130 06:47:11.985551 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.005578 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.025223 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.046029 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.065751 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.085276 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.105189 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.125181 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.145307 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.164831 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.184814 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.205434 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.225244 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.245267 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.276137 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6nc4\" (UniqueName: \"kubernetes.io/projected/d63d73a7-c813-4983-bccf-805604f7d593-kube-api-access-d6nc4\") pod \"route-controller-manager-6576b87f9c-pqjqj\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.295740 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs4tc\" (UniqueName: \"kubernetes.io/projected/dd235a24-175b-4983-980e-2630b3c5b39f-kube-api-access-cs4tc\") pod \"controller-manager-879f6c89f-8jk9c\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.300526 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.313691 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.317359 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcbpf\" (UniqueName: \"kubernetes.io/projected/4a3be9f1-bd40-4667-bdd7-2cf23292fab5-kube-api-access-dcbpf\") pod \"openshift-config-operator-7777fb866f-rn9s4\" (UID: \"4a3be9f1-bd40-4667-bdd7-2cf23292fab5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.336848 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwpmd\" (UniqueName: \"kubernetes.io/projected/4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47-kube-api-access-nwpmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-rck29\" (UID: \"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.345440 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.359483 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgvmp\" (UniqueName: \"kubernetes.io/projected/265d9231-d5db-4cdb-80b8-dfd95dffa386-kube-api-access-bgvmp\") pod \"oauth-openshift-558db77b4-782cc\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") " pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.382009 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7pt2\" (UniqueName: \"kubernetes.io/projected/22d49062-540d-414e-b0c6-2c20d411fa71-kube-api-access-j7pt2\") pod \"authentication-operator-69f744f599-dqjws\" (UID: \"22d49062-540d-414e-b0c6-2c20d411fa71\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.398011 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zgw\" (UniqueName: \"kubernetes.io/projected/b8ab10e4-5a02-445b-8788-1ed64c22c9e3-kube-api-access-76zgw\") pod \"dns-operator-744455d44c-ll7nf\" (UID: \"b8ab10e4-5a02-445b-8788-1ed64c22c9e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410326 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-certs\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410356 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-metrics-certs\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410387 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-srv-cert\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410403 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-stats-auth\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410462 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410496 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-node-bootstrap-token\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410563 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7229bd1-5891-4654-ad14-c0efed77e9b7-service-ca-bundle\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410584 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-webhook-cert\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410643 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410661 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-metrics-tls\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410691 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-key\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410728 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-apiservice-cert\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410793 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-default-certificate\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410830 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-config-volume\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.410850 4520 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-cabundle\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.411563 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7229bd1-5891-4654-ad14-c0efed77e9b7-service-ca-bundle\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.412361 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.412480 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.413841 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-stats-auth\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.411572 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-cabundle\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.415988 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-default-certificate\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.416400 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-apiservice-cert\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.417981 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7229bd1-5891-4654-ad14-c0efed77e9b7-metrics-certs\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 
06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.418228 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-srv-cert\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.418480 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-signing-key\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.418903 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dea262-c989-43a8-ae6e-e744012a5e07-webhook-cert\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.419478 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksvln\" (UniqueName: \"kubernetes.io/projected/23b08d0a-4aa5-43be-a498-55e54d6e8c31-kube-api-access-ksvln\") pod \"console-operator-58897d9998-w7xl2\" (UID: \"23b08d0a-4aa5-43be-a498-55e54d6e8c31\") " pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.437176 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjzk2\" (UniqueName: \"kubernetes.io/projected/f97d3be8-69cc-4005-aa61-9ff3f6c72287-kube-api-access-kjzk2\") pod \"machine-approver-56656f9798-6n75g\" (UID: \"f97d3be8-69cc-4005-aa61-9ff3f6c72287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.458685 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65vrn\" (UniqueName: \"kubernetes.io/projected/f56326ab-bf4f-43c5-8762-85cb71c93f0a-kube-api-access-65vrn\") pod \"downloads-7954f5f757-lflpb\" (UID: \"f56326ab-bf4f-43c5-8762-85cb71c93f0a\") " pod="openshift-console/downloads-7954f5f757-lflpb" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.465931 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.486111 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29"] Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.486343 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.495101 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-metrics-tls\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.505512 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 
06:47:12.506728 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-config-volume\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.511819 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:12 crc kubenswrapper[4520]: E0130 06:47:12.512012 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:49:14.511993928 +0000 UTC m=+268.140346109 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.525657 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.531191 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-certs\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.545692 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.564990 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.574681 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e209fbc5-b75f-4fe7-829b-351ce502929e-node-bootstrap-token\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.577348 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.590735 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" Jan 30 06:47:12 crc kubenswrapper[4520]: W0130 06:47:12.600449 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf97d3be8_69cc_4005_aa61_9ff3f6c72287.slice/crio-d1b4de5f93c73e8264928c736dd0f13a5935d01e2346b574680506d111f9d8ce WatchSource:0}: Error finding container d1b4de5f93c73e8264928c736dd0f13a5935d01e2346b574680506d111f9d8ce: Status 404 returned error can't find the container with id d1b4de5f93c73e8264928c736dd0f13a5935d01e2346b574680506d111f9d8ce Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.605754 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.605842 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.613359 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.613437 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.613458 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.614641 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.615368 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.616983 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.617947 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.618353 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.623922 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.625017 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.629495 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.639270 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.646106 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.655896 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-lflpb" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.662902 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"] Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.664700 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jk9c"] Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.666160 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.685146 4520 request.go:700] Waited for 1.927511384s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.687559 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.705031 4520 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.725353 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.760874 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj2dg\" (UniqueName: \"kubernetes.io/projected/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-kube-api-access-vj2dg\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.778972 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwpp\" (UniqueName: \"kubernetes.io/projected/63191221-7520-4517-aeed-6d3896c2cad1-kube-api-access-llwpp\") pod \"apiserver-7bbb656c7d-2vpl2\" (UID: \"63191221-7520-4517-aeed-6d3896c2cad1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.798093 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-782cc"] Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.813074 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf96x\" (UniqueName: \"kubernetes.io/projected/2b9d0f20-53d1-4142-b961-55d553553aed-kube-api-access-kf96x\") pod \"machine-api-operator-5694c8668f-s6bks\" (UID: \"2b9d0f20-53d1-4142-b961-55d553553aed\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.818909 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw6dm\" (UniqueName: \"kubernetes.io/projected/d0267c2e-5b07-4578-bc73-2504b5300313-kube-api-access-hw6dm\") pod \"apiserver-76f77b778f-hzv4j\" (UID: \"d0267c2e-5b07-4578-bc73-2504b5300313\") " pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:12 crc kubenswrapper[4520]: W0130 06:47:12.833186 4520 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod265d9231_d5db_4cdb_80b8_dfd95dffa386.slice/crio-4e9a2bb94e50ca225544494bb59d454e40935ef2c74911ce39f15e05276e4fcc WatchSource:0}: Error finding container 4e9a2bb94e50ca225544494bb59d454e40935ef2c74911ce39f15e05276e4fcc: Status 404 returned error can't find the container with id 4e9a2bb94e50ca225544494bb59d454e40935ef2c74911ce39f15e05276e4fcc Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.845065 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwtz\" (UniqueName: \"kubernetes.io/projected/a7374ef9-1396-4293-b711-fb07eaa512d0-kube-api-access-5mwtz\") pod \"etcd-operator-b45778765-w62kb\" (UID: \"a7374ef9-1396-4293-b711-fb07eaa512d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.858842 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8fg\" (UniqueName: \"kubernetes.io/projected/b0dc81d4-052e-46df-a17e-4461ccf8a64d-kube-api-access-9p8fg\") pod \"cluster-samples-operator-665b6dd947-sks8c\" (UID: \"b0dc81d4-052e-46df-a17e-4461ccf8a64d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.877323 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d23e44d-fbe6-40d1-8d6e-bf19cc751be8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bdjcm\" (UID: \"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.898484 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.901304 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.902196 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vddvg\" (UniqueName: \"kubernetes.io/projected/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-kube-api-access-vddvg\") pod \"console-f9d7485db-nkbdc\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") " pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.906251 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.920170 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hwqj\" (UniqueName: \"kubernetes.io/projected/b1b628dc-8ac5-4463-bcdd-b573fa6c1e80-kube-api-access-5hwqj\") pod \"openshift-apiserver-operator-796bbdcf4f-bhzlz\" (UID: \"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.957719 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccqqt\" (UniqueName: \"kubernetes.io/projected/7d200a37-0276-4e2c-b7ef-98107be3f313-kube-api-access-ccqqt\") pod \"marketplace-operator-79b997595-fd76j\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.964650 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.969706 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.977643 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.978400 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8htt\" (UniqueName: \"kubernetes.io/projected/5dfff538-11e7-4c6b-9db0-c26e2f6b6140-kube-api-access-p8htt\") pod \"olm-operator-6b444d44fb-bjb69\" (UID: \"5dfff538-11e7-4c6b-9db0-c26e2f6b6140\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" Jan 30 06:47:12 crc kubenswrapper[4520]: I0130 06:47:12.983991 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:12.999998 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgnhr\" (UniqueName: \"kubernetes.io/projected/350b6a45-2c99-453a-9e85-e97a1adc863d-kube-api-access-xgnhr\") pod \"collect-profiles-29495925-q62ms\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.007033 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.008652 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ll7nf"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.024131 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmfsg\" (UniqueName: \"kubernetes.io/projected/622e7434-1ad5-41f3-9c60-bfafb7b6dd3a-kube-api-access-bmfsg\") pod \"kube-storage-version-migrator-operator-b67b599dd-6sjr4\" (UID: \"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.026769 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.035560 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.042665 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.047613 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjpk\" (UniqueName: \"kubernetes.io/projected/dbaed70c-7770-412b-b469-4e5bedbb7df7-kube-api-access-rbjpk\") pod \"control-plane-machine-set-operator-78cbb6b69f-qgkcs\" (UID: \"dbaed70c-7770-412b-b469-4e5bedbb7df7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.062340 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/755be20c-e623-49b4-8c1b-97f651a664f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qv6cz\" (UID: \"755be20c-e623-49b4-8c1b-97f651a664f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz" Jan 30 06:47:13 crc kubenswrapper[4520]: W0130 06:47:13.075796 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a3be9f1_bd40_4667_bdd7_2cf23292fab5.slice/crio-671921fecf05196b53ffa901db84125abb60d8315618c6e6768123dc5bbb3ff9 WatchSource:0}: Error finding container 671921fecf05196b53ffa901db84125abb60d8315618c6e6768123dc5bbb3ff9: Status 404 returned error can't find the container with id 671921fecf05196b53ffa901db84125abb60d8315618c6e6768123dc5bbb3ff9 Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.077449 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.091680 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54l84\" (UniqueName: \"kubernetes.io/projected/ba04cf12-8677-4024-9c2c-618dfc096d4d-kube-api-access-54l84\") pod \"catalog-operator-68c6474976-qln6b\" (UID: \"ba04cf12-8677-4024-9c2c-618dfc096d4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.106746 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f6039d5-8443-430a-9f72-26ffc3e3310c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ljplq\" (UID: \"2f6039d5-8443-430a-9f72-26ffc3e3310c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.114670 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w7xl2"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.115800 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dqjws"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.117669 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lflpb"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.127503 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2r6p\" (UniqueName: \"kubernetes.io/projected/7b49a935-c5ef-4290-a394-ff47774b9172-kube-api-access-x2r6p\") pod \"machine-config-operator-74547568cd-hjhkn\" (UID: \"7b49a935-c5ef-4290-a394-ff47774b9172\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.130768 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.138728 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.143184 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphsk\" (UniqueName: \"kubernetes.io/projected/e209fbc5-b75f-4fe7-829b-351ce502929e-kube-api-access-kphsk\") pod \"machine-config-server-4526b\" (UID: \"e209fbc5-b75f-4fe7-829b-351ce502929e\") " pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.145032 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.156299 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.159646 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwrq\" (UniqueName: \"kubernetes.io/projected/a2f78b20-5b64-4fb1-8b47-9053654b33a5-kube-api-access-wxwrq\") pod \"service-ca-operator-777779d784-fbccj\" (UID: \"a2f78b20-5b64-4fb1-8b47-9053654b33a5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj" Jan 30 06:47:13 crc kubenswrapper[4520]: W0130 06:47:13.171943 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b08d0a_4aa5_43be_a498_55e54d6e8c31.slice/crio-97ac443bc9da9b72e9874e4d38d199affcd87a1c91754a0c79139b7059f52ac8 WatchSource:0}: Error finding container 97ac443bc9da9b72e9874e4d38d199affcd87a1c91754a0c79139b7059f52ac8: Status 404 returned error can't find the container with id 97ac443bc9da9b72e9874e4d38d199affcd87a1c91754a0c79139b7059f52ac8 Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.173064 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.182221 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4267d7ff-3907-40fe-ac79-e30e74e13476-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.190868 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" event={"ID":"b8ab10e4-5a02-445b-8788-1ed64c22c9e3","Type":"ContainerStarted","Data":"29d9e2f82b211edcb199cbe80b337196d41a70ccabc61ea97dcf7f33645efa64"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.191575 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.193542 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" event={"ID":"f97d3be8-69cc-4005-aa61-9ff3f6c72287","Type":"ContainerStarted","Data":"46bf3f49894140a843cf232533a537f329bfe9fed2baeec62cc8577eddf5fa3c"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.193563 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" event={"ID":"f97d3be8-69cc-4005-aa61-9ff3f6c72287","Type":"ContainerStarted","Data":"d1b4de5f93c73e8264928c736dd0f13a5935d01e2346b574680506d111f9d8ce"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.199564 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" event={"ID":"d63d73a7-c813-4983-bccf-805604f7d593","Type":"ContainerStarted","Data":"8a6ab591496ffcc19fe10012aeb39bf277ecd6461fbc25e5a0f2ed8e5dfa055d"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.199597 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" event={"ID":"d63d73a7-c813-4983-bccf-805604f7d593","Type":"ContainerStarted","Data":"21552e408d7c3ebafd95db380175ddb5ed7f87a0b09e79d8a6dddee1e8745898"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.200242 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:13 crc kubenswrapper[4520]: W0130 06:47:13.200304 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-57613cac288b19bb4569f15c71dc5294320e8fb9bd38c29a2c26fc783cd4ccec WatchSource:0}: Error finding container 57613cac288b19bb4569f15c71dc5294320e8fb9bd38c29a2c26fc783cd4ccec: Status 404 returned error can't find the container with id 57613cac288b19bb4569f15c71dc5294320e8fb9bd38c29a2c26fc783cd4ccec Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.202438 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jnnc\" (UniqueName: \"kubernetes.io/projected/c033428a-1e35-46a7-a589-d2374d629f46-kube-api-access-4jnnc\") pod \"multus-admission-controller-857f4d67dd-85d5l\" (UID: \"c033428a-1e35-46a7-a589-d2374d629f46\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.203971 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" event={"ID":"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47","Type":"ContainerStarted","Data":"ceefd71e7c947be99112569b7b028ff2421513502745627384baac8c90411d03"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.204013 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" event={"ID":"4f4d90ef-dfaa-4a6b-8e9f-dc4e4039da47","Type":"ContainerStarted","Data":"744716a026f6e01435bcf2714c7da2410f1a8d9c49537f368a6c1981aa4904b7"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.207957 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.216738 4520 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-pqjqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.216938 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" podUID="d63d73a7-c813-4983-bccf-805604f7d593" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.217243 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.236388 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn29t\" (UniqueName: \"kubernetes.io/projected/5c4cb732-fc3d-4607-8051-d1ac81d4b9ad-kube-api-access-jn29t\") pod \"dns-default-bd6fq\" (UID: \"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad\") " pod="openshift-dns/dns-default-bd6fq" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.237975 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.242744 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.245018 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" event={"ID":"dd235a24-175b-4983-980e-2630b3c5b39f","Type":"ContainerStarted","Data":"509ae1a371e95feb26565995e46d7370183a7f57dd1c8b897ed0be107fc0f00a"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.245047 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" event={"ID":"dd235a24-175b-4983-980e-2630b3c5b39f","Type":"ContainerStarted","Data":"03df071c64f6cdcf64ba51826a5cff0a863da13e6ccc617943adefc81c874ad5"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.245527 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.248910 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bd6fq" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.252265 4520 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8jk9c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.252295 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" podUID="dd235a24-175b-4983-980e-2630b3c5b39f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.253487 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" event={"ID":"23b08d0a-4aa5-43be-a498-55e54d6e8c31","Type":"ContainerStarted","Data":"97ac443bc9da9b72e9874e4d38d199affcd87a1c91754a0c79139b7059f52ac8"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.254666 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4526b" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.261208 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5wm\" (UniqueName: \"kubernetes.io/projected/86dea262-c989-43a8-ae6e-e744012a5e07-kube-api-access-st5wm\") pod \"packageserver-d55dfcdfc-kcrth\" (UID: \"86dea262-c989-43a8-ae6e-e744012a5e07\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.274412 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03f68811-ba27-419e-afa9-1640c681b1fc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wvf85\" (UID: \"03f68811-ba27-419e-afa9-1640c681b1fc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.283943 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv7s4\" (UniqueName: \"kubernetes.io/projected/b3470d5b-3e9f-4d41-a992-77b47e35ac52-kube-api-access-bv7s4\") pod \"migrator-59844c95c7-8pt4x\" (UID: \"b3470d5b-3e9f-4d41-a992-77b47e35ac52\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.291714 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" event={"ID":"4a3be9f1-bd40-4667-bdd7-2cf23292fab5","Type":"ContainerStarted","Data":"671921fecf05196b53ffa901db84125abb60d8315618c6e6768123dc5bbb3ff9"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.304114 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" event={"ID":"265d9231-d5db-4cdb-80b8-dfd95dffa386","Type":"ContainerStarted","Data":"7177b2e882009109fb97b6be4a37c50289504718c10d9d0722d9ebc363b675ce"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.304134 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" 
event={"ID":"265d9231-d5db-4cdb-80b8-dfd95dffa386","Type":"ContainerStarted","Data":"4e9a2bb94e50ca225544494bb59d454e40935ef2c74911ce39f15e05276e4fcc"} Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.305313 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.315632 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vf2f\" (UniqueName: \"kubernetes.io/projected/82561e0e-8f14-4e88-adbb-b0a2b3d8760c-kube-api-access-5vf2f\") pod \"package-server-manager-789f6589d5-nc9qp\" (UID: \"82561e0e-8f14-4e88-adbb-b0a2b3d8760c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.316459 4520 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-782cc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" start-of-body= Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.316487 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" podUID="265d9231-d5db-4cdb-80b8-dfd95dffa386" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.322855 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsqcj\" (UniqueName: \"kubernetes.io/projected/0c769d70-6c64-4e67-ad6a-cb99f70c31c0-kube-api-access-jsqcj\") pod \"service-ca-9c57cc56f-4m8ns\" (UID: \"0c769d70-6c64-4e67-ad6a-cb99f70c31c0\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.337701 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.357352 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8766j\" (UniqueName: \"kubernetes.io/projected/4267d7ff-3907-40fe-ac79-e30e74e13476-kube-api-access-8766j\") pod \"ingress-operator-5b745b69d9-jhnpn\" (UID: \"4267d7ff-3907-40fe-ac79-e30e74e13476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.365947 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmjdp\" (UniqueName: \"kubernetes.io/projected/a7229bd1-5891-4654-ad14-c0efed77e9b7-kube-api-access-xmjdp\") pod \"router-default-5444994796-z67kf\" (UID: \"a7229bd1-5891-4654-ad14-c0efed77e9b7\") " pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.437578 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28a7e740-6b3e-49a1-ac09-f802137f6a84-installation-pull-secrets\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.437638 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-bound-sa-token\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.437727 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28a7e740-6b3e-49a1-ac09-f802137f6a84-ca-trust-extracted\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.437806 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc42137-1969-4a7f-89d3-8ded4455ee64-proxy-tls\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.437906 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-tls\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.437970 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-certificates\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.438003 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlsg\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-kube-api-access-9rlsg\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.438030 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bc42137-1969-4a7f-89d3-8ded4455ee64-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.438065 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.438094 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nrhgs\" (UniqueName: \"kubernetes.io/projected/1bc42137-1969-4a7f-89d3-8ded4455ee64-kube-api-access-nrhgs\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.438121 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-trusted-ca\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: E0130 06:47:13.438388 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:13.938374928 +0000 UTC m=+147.566727109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.461862 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.467779 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.481142 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.484798 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.501726 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.525398 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.525629 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.529696 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.538759 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:13 crc kubenswrapper[4520]: E0130 06:47:13.540888 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.040868069 +0000 UTC m=+147.669220249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541017 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-trusted-ca\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541064 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-registration-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541084 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-csi-data-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541099 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-socket-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541131 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28a7e740-6b3e-49a1-ac09-f802137f6a84-installation-pull-secrets\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541144 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-bound-sa-token\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541187 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-plugins-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541364 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-mountpoint-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541389 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28a7e740-6b3e-49a1-ac09-f802137f6a84-ca-trust-extracted\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.541853 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x6m7\" (UniqueName: \"kubernetes.io/projected/8c17950d-e37b-477d-87d9-d3a92b487ff3-kube-api-access-7x6m7\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.542672 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28a7e740-6b3e-49a1-ac09-f802137f6a84-ca-trust-extracted\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544183 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc42137-1969-4a7f-89d3-8ded4455ee64-proxy-tls\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544267 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-tls\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544360 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkxqg\" (UniqueName: \"kubernetes.io/projected/e29c0451-b95f-4ddd-ad98-f07a93aa5e5e-kube-api-access-rkxqg\") pod \"ingress-canary-x24fr\" (UID: \"e29c0451-b95f-4ddd-ad98-f07a93aa5e5e\") " 
pod="openshift-ingress-canary/ingress-canary-x24fr" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544394 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-certificates\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544368 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-trusted-ca\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544495 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlsg\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-kube-api-access-9rlsg\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544677 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bc42137-1969-4a7f-89d3-8ded4455ee64-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544805 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.544969 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrhgs\" (UniqueName: \"kubernetes.io/projected/1bc42137-1969-4a7f-89d3-8ded4455ee64-kube-api-access-nrhgs\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.545047 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e29c0451-b95f-4ddd-ad98-f07a93aa5e5e-cert\") pod \"ingress-canary-x24fr\" (UID: \"e29c0451-b95f-4ddd-ad98-f07a93aa5e5e\") " pod="openshift-ingress-canary/ingress-canary-x24fr" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.548412 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bc42137-1969-4a7f-89d3-8ded4455ee64-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: E0130 06:47:13.548685 4520 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.048671338 +0000 UTC m=+147.677023520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.549922 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-certificates\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.566789 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bc42137-1969-4a7f-89d3-8ded4455ee64-proxy-tls\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.572487 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-tls\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.572907 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28a7e740-6b3e-49a1-ac09-f802137f6a84-installation-pull-secrets\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.578941 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlsg\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-kube-api-access-9rlsg\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.637982 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-bound-sa-token\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649024 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:13 crc kubenswrapper[4520]: E0130 06:47:13.649142 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.149126492 +0000 UTC m=+147.777478674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649165 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x6m7\" (UniqueName: \"kubernetes.io/projected/8c17950d-e37b-477d-87d9-d3a92b487ff3-kube-api-access-7x6m7\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649221 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkxqg\" (UniqueName: \"kubernetes.io/projected/e29c0451-b95f-4ddd-ad98-f07a93aa5e5e-kube-api-access-rkxqg\") pod \"ingress-canary-x24fr\" (UID: \"e29c0451-b95f-4ddd-ad98-f07a93aa5e5e\") " pod="openshift-ingress-canary/ingress-canary-x24fr" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649261 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649289 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e29c0451-b95f-4ddd-ad98-f07a93aa5e5e-cert\") pod \"ingress-canary-x24fr\" (UID: \"e29c0451-b95f-4ddd-ad98-f07a93aa5e5e\") " pod="openshift-ingress-canary/ingress-canary-x24fr" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649311 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-registration-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649325 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrhgs\" (UniqueName: \"kubernetes.io/projected/1bc42137-1969-4a7f-89d3-8ded4455ee64-kube-api-access-nrhgs\") pod \"machine-config-controller-84d6567774-4pxnp\" (UID: \"1bc42137-1969-4a7f-89d3-8ded4455ee64\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649371 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-socket-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649396 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-csi-data-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649430 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-plugins-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649442 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-socket-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649483 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-mountpoint-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649489 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-plugins-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649602 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-registration-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649647 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-csi-data-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.649670 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c17950d-e37b-477d-87d9-d3a92b487ff3-mountpoint-dir\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: E0130 06:47:13.649936 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-30 06:47:14.149925877 +0000 UTC m=+147.778278059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.656925 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e29c0451-b95f-4ddd-ad98-f07a93aa5e5e-cert\") pod \"ingress-canary-x24fr\" (UID: \"e29c0451-b95f-4ddd-ad98-f07a93aa5e5e\") " pod="openshift-ingress-canary/ingress-canary-x24fr" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.700834 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x6m7\" (UniqueName: \"kubernetes.io/projected/8c17950d-e37b-477d-87d9-d3a92b487ff3-kube-api-access-7x6m7\") pod \"csi-hostpathplugin-cr54l\" (UID: \"8c17950d-e37b-477d-87d9-d3a92b487ff3\") " pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.749562 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkxqg\" (UniqueName: \"kubernetes.io/projected/e29c0451-b95f-4ddd-ad98-f07a93aa5e5e-kube-api-access-rkxqg\") pod \"ingress-canary-x24fr\" (UID: \"e29c0451-b95f-4ddd-ad98-f07a93aa5e5e\") " pod="openshift-ingress-canary/ingress-canary-x24fr" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.749843 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:13 crc kubenswrapper[4520]: E0130 06:47:13.750167 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.250151871 +0000 UTC m=+147.878504042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.758529 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.851333 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:13 crc kubenswrapper[4520]: E0130 06:47:13.851801 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.351751569 +0000 UTC m=+147.980103750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.861599 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s6bks"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.861741 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x24fr" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.878697 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.891942 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.892201 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.952352 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:13 crc kubenswrapper[4520]: E0130 06:47:13.952662 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.45265033 +0000 UTC m=+148.081002511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.996381 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.996409 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w62kb"] Jan 30 06:47:13 crc kubenswrapper[4520]: I0130 06:47:13.997999 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn"] Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.004186 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz"] Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.059426 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.059967 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.559956972 +0000 UTC m=+148.188309143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.122608 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"] Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.160959 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.161231 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.661218665 +0000 UTC m=+148.289570845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.302508 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.303456 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.803442747 +0000 UTC m=+148.431794928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.347097 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bd6fq"] Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.353607 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fbccj"] Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.354715 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nkbdc"] Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.408252 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.408565 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:14.908552755 +0000 UTC m=+148.536904936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.424431 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" event={"ID":"b0dc81d4-052e-46df-a17e-4461ccf8a64d","Type":"ContainerStarted","Data":"2fbe669edc625c968233a13f18ed820f210fde8b351c9a76d5d77798e099c760"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.443380 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hzv4j"] Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.445709 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4cc24ab965ac30dbc7ef8041f3c66f50153f46050b07cc76f62426dc8a287c74"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.445744 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"57613cac288b19bb4569f15c71dc5294320e8fb9bd38c29a2c26fc783cd4ccec"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.446315 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.469383 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5a2924327b049af242066ef7f64af3bde4ea2abfc49f013859ebd101b3b168e5"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.469436 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fbd0f638f1cd4091821597caf6aa70b52db9616a37eb3274d6e40f30b363d841"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.489804 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4526b" event={"ID":"e209fbc5-b75f-4fe7-829b-351ce502929e","Type":"ContainerStarted","Data":"6f24d2098574763ea35b829cc7fc2863a159bcc4704e68a8789d91396394fbd4"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.508067 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lflpb" event={"ID":"f56326ab-bf4f-43c5-8762-85cb71c93f0a","Type":"ContainerStarted","Data":"ec2cb10134106e2679aeb5d313bcf49d525bea7b99f213deac75eadcfca697a2"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.508212 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lflpb" event={"ID":"f56326ab-bf4f-43c5-8762-85cb71c93f0a","Type":"ContainerStarted","Data":"98604ee4fff45f3bbfccd9f552d3f5195e0a7615cec31bc6efd660e442dcf777"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 
06:47:14.508558 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lflpb" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.508963 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.509207 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.009198298 +0000 UTC m=+148.637550479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.509649 4520 patch_prober.go:28] interesting pod/downloads-7954f5f757-lflpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.509715 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lflpb" podUID="f56326ab-bf4f-43c5-8762-85cb71c93f0a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.526862 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz" event={"ID":"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80","Type":"ContainerStarted","Data":"0ff2e3e19f993d2fd56c0156c9a01ab2ebffa08136c566d2a5421936721adb7b"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.529147 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" event={"ID":"a7374ef9-1396-4293-b711-fb07eaa512d0","Type":"ContainerStarted","Data":"716279b261fb211cf073b73cd6e9ef4edfa9d2236134ff17c4ef04ebce50d0bf"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.545018 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz" event={"ID":"755be20c-e623-49b4-8c1b-97f651a664f7","Type":"ContainerStarted","Data":"9f8ef1c596ef7263d698b9d94b273edf5d393284a1fd9bba3363952bee56be55"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.546770 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" event={"ID":"63191221-7520-4517-aeed-6d3896c2cad1","Type":"ContainerStarted","Data":"319c17924b0819522027ad383f8b9f67711e2cdc42834adc8080568cd8d5f5a9"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.547363 4520 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" event={"ID":"7b49a935-c5ef-4290-a394-ff47774b9172","Type":"ContainerStarted","Data":"c616ae779fff045420152687215baa99fdf53cab9e83bc2fea8069c78d170377"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.547944 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" event={"ID":"2b9d0f20-53d1-4142-b961-55d553553aed","Type":"ContainerStarted","Data":"fc9e877d0bcee49c2fbe7c2dd6db55b8ae56c0957a886e32a1261f36b0bf7eee"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.575315 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"23c27bafcddbdee842815393409b6a9aa1191d319aac347deb960b18ace36162"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.575350 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f2236ebd94236e6e78d4c7e69bf0b15cd69ce3f67abe55ad2b62649a6f986070"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.613868 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.614707 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.114693289 +0000 UTC m=+148.743045471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.655957 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" event={"ID":"23b08d0a-4aa5-43be-a498-55e54d6e8c31","Type":"ContainerStarted","Data":"730c7d86939b8b22ada65f588ab575155a69e61ec1dcaabe2668edc0c804436a"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.656750 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.665323 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.665372 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 30 06:47:14 crc kubenswrapper[4520]: W0130 06:47:14.677644 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f78b20_5b64_4fb1_8b47_9053654b33a5.slice/crio-7d5ffe5cae80f605d84154c8931fcb75d263b99138d4c40b77e38fc36a398a12 WatchSource:0}: Error finding container 7d5ffe5cae80f605d84154c8931fcb75d263b99138d4c40b77e38fc36a398a12: Status 404 returned error can't find the container with id 7d5ffe5cae80f605d84154c8931fcb75d263b99138d4c40b77e38fc36a398a12 Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.678508 4520 generic.go:334] "Generic (PLEG): container finished" podID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerID="8c9300c191e501f7195fb6ff0c3c007a9a2eaf69b4c4cb5764f5b644c4dc4e50" exitCode=0 Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.679339 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" event={"ID":"4a3be9f1-bd40-4667-bdd7-2cf23292fab5","Type":"ContainerDied","Data":"8c9300c191e501f7195fb6ff0c3c007a9a2eaf69b4c4cb5764f5b644c4dc4e50"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.690003 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rck29" podStartSLOduration=126.689972611 podStartE2EDuration="2m6.689972611s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:14.626366654 +0000 UTC m=+148.254718835" watchObservedRunningTime="2026-01-30 06:47:14.689972611 +0000 UTC m=+148.318324792" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 
06:47:14.715781 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.716981 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.216967649 +0000 UTC m=+148.845319830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.754258 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z67kf" event={"ID":"a7229bd1-5891-4654-ad14-c0efed77e9b7","Type":"ContainerStarted","Data":"006600cac7e418a55d4cf60198015c09f266951c4e9362b8fb5766610bcd80a4"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.759437 4520 csr.go:261] certificate signing request csr-6hmpw is approved, waiting to be issued Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.764423 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" podStartSLOduration=126.764390491 podStartE2EDuration="2m6.764390491s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:14.760656922 +0000 UTC m=+148.389009103" watchObservedRunningTime="2026-01-30 06:47:14.764390491 +0000 UTC m=+148.392742672" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.771330 4520 csr.go:257] certificate signing request csr-6hmpw is issued Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.789881 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" event={"ID":"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8","Type":"ContainerStarted","Data":"44e67f19257671f96bbd9732dde68496901fa1d1bbd27e102556d45c7a43eedd"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.816948 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.817325 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 06:47:15.317312495 +0000 UTC m=+148.945664676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.817703 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" event={"ID":"22d49062-540d-414e-b0c6-2c20d411fa71","Type":"ContainerStarted","Data":"ba69a6990fa6e19ab27b958a5d3beb06a49879a3abc4ad5364b14731faa4ac91"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.817740 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" event={"ID":"22d49062-540d-414e-b0c6-2c20d411fa71","Type":"ContainerStarted","Data":"5e64554c53c18f9379ed3ac22715a3b06d3650e029e8d93968cb6c0f3f57451d"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.843624 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" event={"ID":"b8ab10e4-5a02-445b-8788-1ed64c22c9e3","Type":"ContainerStarted","Data":"c0d40f4b2d8137b53232de9bd868b967aada2684f5a5849d3625b42f9f5fc4e1"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.855410 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" event={"ID":"f97d3be8-69cc-4005-aa61-9ff3f6c72287","Type":"ContainerStarted","Data":"f0696cec274f27706262450ac59026e04354e7f52b643279cba6270acb21374a"} Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.866576 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.869204 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.869593 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" Jan 30 06:47:14 crc kubenswrapper[4520]: I0130 06:47:14.925801 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:14 crc kubenswrapper[4520]: E0130 06:47:14.926341 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.42632568 +0000 UTC m=+149.054677860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.027217 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.037650 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.537629778 +0000 UTC m=+149.165981959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.167020 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.167501 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.667479636 +0000 UTC m=+149.295831817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.167944 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.176805 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" podStartSLOduration=127.165495631 podStartE2EDuration="2m7.165495631s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:15.15187964 +0000 UTC m=+148.780231821" watchObservedRunningTime="2026-01-30 06:47:15.165495631 +0000 UTC m=+148.793847812" Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.234234 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" podStartSLOduration=127.234210827 podStartE2EDuration="2m7.234210827s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:15.207974497 +0000 UTC m=+148.836326678" watchObservedRunningTime="2026-01-30 06:47:15.234210827 +0000 UTC m=+148.862563007" Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.245051 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.276013 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.276557 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.776537195 +0000 UTC m=+149.404889376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.339733 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.357550 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fd76j"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.377026 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.383116 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.383699 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.883679928 +0000 UTC m=+149.512032109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.395956 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" podStartSLOduration=127.395932504 podStartE2EDuration="2m7.395932504s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:15.367655773 +0000 UTC m=+148.996007954" watchObservedRunningTime="2026-01-30 06:47:15.395932504 +0000 UTC m=+149.024284684" Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.443604 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.493564 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.494120 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:15.994098148 +0000 UTC m=+149.622450329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: W0130 06:47:15.573321 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d200a37_0276_4e2c_b7ef_98107be3f313.slice/crio-03578b1535977c81d8c7d50565409738cf3702af5bfc98a9fb07ec04c6c7fdbc WatchSource:0}: Error finding container 03578b1535977c81d8c7d50565409738cf3702af5bfc98a9fb07ec04c6c7fdbc: Status 404 returned error can't find the container with id 03578b1535977c81d8c7d50565409738cf3702af5bfc98a9fb07ec04c6c7fdbc Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.584210 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-lflpb" podStartSLOduration=127.584196466 podStartE2EDuration="2m7.584196466s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:15.581888341 +0000 UTC m=+149.210240522" watchObservedRunningTime="2026-01-30 06:47:15.584196466 +0000 UTC m=+149.212548647" Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.586832 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.588729 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.596592 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.596888 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.09687843 +0000 UTC m=+149.725230611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.599617 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.625740 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"] Jan 30 06:47:15 crc kubenswrapper[4520]: W0130 06:47:15.635786 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86dea262_c989_43a8_ae6e_e744012a5e07.slice/crio-c4962aa4990c78eeb89da6fab51f5515f2b1434603223c53931f024a6148e09e WatchSource:0}: Error finding container c4962aa4990c78eeb89da6fab51f5515f2b1434603223c53931f024a6148e09e: Status 404 returned error can't find the container with id c4962aa4990c78eeb89da6fab51f5515f2b1434603223c53931f024a6148e09e Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.693934 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.729621 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.730241 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.230226985 +0000 UTC m=+149.858579157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.732340 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podStartSLOduration=127.73232784 podStartE2EDuration="2m7.73232784s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:15.730679737 +0000 UTC m=+149.359031918" watchObservedRunningTime="2026-01-30 06:47:15.73232784 +0000 UTC m=+149.360680020" Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.769525 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-85d5l"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.773062 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 06:42:14 +0000 UTC, rotation deadline is 2026-12-03 21:47:59.488360061 +0000 UTC Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.773236 4520 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7383h0m43.715126938s for next certificate rotation Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.773213 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.788082 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6n75g" podStartSLOduration=127.78806826 podStartE2EDuration="2m7.78806826s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:15.747433255 +0000 UTC m=+149.375785437" watchObservedRunningTime="2026-01-30 06:47:15.78806826 +0000 UTC m=+149.416420441" Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.830977 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.831392 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.331379993 +0000 UTC m=+149.959732174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.860391 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cr54l"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.881116 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4m8ns"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.906398 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x24fr"] Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.932533 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:15 crc kubenswrapper[4520]: E0130 06:47:15.933180 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.433162456 +0000 UTC m=+150.061514637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:15 crc kubenswrapper[4520]: I0130 06:47:15.989729 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" event={"ID":"1bc42137-1969-4a7f-89d3-8ded4455ee64","Type":"ContainerStarted","Data":"8323d708e109df4d7ca8a7389dc123b31a1d8e51397113ff0c7dba71c1f76fd4"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.000903 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq" event={"ID":"2f6039d5-8443-430a-9f72-26ffc3e3310c","Type":"ContainerStarted","Data":"e8428cc3535cbdac5f6e67a473df43a4b2c77d4805778c68ffc4908a411a69c6"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.031823 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" event={"ID":"7d200a37-0276-4e2c-b7ef-98107be3f313","Type":"ContainerStarted","Data":"03578b1535977c81d8c7d50565409738cf3702af5bfc98a9fb07ec04c6c7fdbc"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.034998 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.035255 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.535244142 +0000 UTC m=+150.163596323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.051568 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" event={"ID":"b0dc81d4-052e-46df-a17e-4461ccf8a64d","Type":"ContainerStarted","Data":"f3d25acc8006cd6f03ba8b34c5d2b1c31f4c66d5c47333583be8ed8df3dde38a"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.051611 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" event={"ID":"b0dc81d4-052e-46df-a17e-4461ccf8a64d","Type":"ContainerStarted","Data":"b54b7d5f823f1be687b166376d32aceb5bac68b80d87dc0a032e63fbe5b02481"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.072313 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" event={"ID":"4267d7ff-3907-40fe-ac79-e30e74e13476","Type":"ContainerStarted","Data":"401d945a90cefcfea1c5c8320b9389d6915f18dd6f8e78a071d606675ec4513b"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.084483 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" event={"ID":"b8ab10e4-5a02-445b-8788-1ed64c22c9e3","Type":"ContainerStarted","Data":"4034e99b2ede0a2def47f3f8cd6931f8589b66565b0e055393cce2ef04080aff"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.087974 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-sks8c" podStartSLOduration=128.08796073 podStartE2EDuration="2m8.08796073s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.086265028 +0000 UTC m=+149.714617209" watchObservedRunningTime="2026-01-30 06:47:16.08796073 +0000 UTC m=+149.716312912" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.106706 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4526b" event={"ID":"e209fbc5-b75f-4fe7-829b-351ce502929e","Type":"ContainerStarted","Data":"2a5a1d5b10a09b8279ccc2f8072dc499dbc04121c5383a7348f048e6bd0ebfd2"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.116246 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x" 
event={"ID":"b3470d5b-3e9f-4d41-a992-77b47e35ac52","Type":"ContainerStarted","Data":"4ef0f058cf19383985a41bee3029a9e1bb6f061d7f0beb4997976158989e16e2"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.134579 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs" event={"ID":"dbaed70c-7770-412b-b469-4e5bedbb7df7","Type":"ContainerStarted","Data":"125ee5d406b20b6e25fb1e50d2975fae936bf8a02533e300db7984501ebc34df"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.135772 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.136442 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.636410025 +0000 UTC m=+150.264762206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.156956 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ll7nf" podStartSLOduration=128.156937627 podStartE2EDuration="2m8.156937627s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.116530442 +0000 UTC m=+149.744882623" watchObservedRunningTime="2026-01-30 06:47:16.156937627 +0000 UTC m=+149.785289808" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.159917 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85" event={"ID":"03f68811-ba27-419e-afa9-1640c681b1fc","Type":"ContainerStarted","Data":"9c924e1b96aae636a7322a110791b4970e54d3a2ea6ef6b1544daea429de6d6c"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.176942 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4526b" podStartSLOduration=6.176927578 podStartE2EDuration="6.176927578s" podCreationTimestamp="2026-01-30 06:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.156549326 +0000 UTC m=+149.784901507" watchObservedRunningTime="2026-01-30 06:47:16.176927578 +0000 UTC m=+149.805279759" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.177138 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs" podStartSLOduration=128.177134919 
podStartE2EDuration="2m8.177134919s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.177100113 +0000 UTC m=+149.805452293" watchObservedRunningTime="2026-01-30 06:47:16.177134919 +0000 UTC m=+149.805487090" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.187024 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nkbdc" event={"ID":"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b","Type":"ContainerStarted","Data":"e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.187055 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nkbdc" event={"ID":"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b","Type":"ContainerStarted","Data":"fab458684c995ff699beeefb01e6147110c0e96e736306d53c4db0dc677c15fd"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.217181 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z67kf" event={"ID":"a7229bd1-5891-4654-ad14-c0efed77e9b7","Type":"ContainerStarted","Data":"d290a10da9fd5e01b8337c522c5e6d92740e66c7432a53ceab083329faa1bf64"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.228600 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" event={"ID":"7b49a935-c5ef-4290-a394-ff47774b9172","Type":"ContainerStarted","Data":"9fece40966d5d5dcbb85ee8b471a965b6f3dc8a15a524d0cd8f83797c48348ef"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.237634 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.237884 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz" event={"ID":"b1b628dc-8ac5-4463-bcdd-b573fa6c1e80","Type":"ContainerStarted","Data":"2f637982aae3b9e60121de0fbe597bf3f32d5c0c392484166f09e5f465db338c"} Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.237929 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.737917671 +0000 UTC m=+150.366269852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.246288 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-nkbdc" podStartSLOduration=128.246267588 podStartE2EDuration="2m8.246267588s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.220682125 +0000 UTC m=+149.849034295" watchObservedRunningTime="2026-01-30 06:47:16.246267588 +0000 UTC m=+149.874619770" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.247724 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-z67kf" podStartSLOduration=128.24771805 podStartE2EDuration="2m8.24771805s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.245945152 +0000 UTC m=+149.874297333" watchObservedRunningTime="2026-01-30 06:47:16.24771805 +0000 UTC m=+149.876070220" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.249028 4520 generic.go:334] "Generic (PLEG): container finished" podID="63191221-7520-4517-aeed-6d3896c2cad1" containerID="44c5fb85172dc2538ef4ee33ce291bf18822dd7ea588fc597e48ec1c98a70648" exitCode=0 Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.249090 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" event={"ID":"63191221-7520-4517-aeed-6d3896c2cad1","Type":"ContainerDied","Data":"44c5fb85172dc2538ef4ee33ce291bf18822dd7ea588fc597e48ec1c98a70648"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.262318 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" event={"ID":"4d23e44d-fbe6-40d1-8d6e-bf19cc751be8","Type":"ContainerStarted","Data":"18b8b1ee6e58a55a4644536e8f7064810b0d32d7c3500c497d6e81f3d8bc6693"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.284336 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4" event={"ID":"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a","Type":"ContainerStarted","Data":"3584c517768aee932ea164d762807a8a895332a0de13b8b93582679355290f6c"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.292337 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" event={"ID":"350b6a45-2c99-453a-9e85-e97a1adc863d","Type":"ContainerStarted","Data":"20fde1c31d9fdb12e9c4c73dee4c6a50945c9239b4e7532525aeeff45e713d60"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.307793 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhzlz" podStartSLOduration=128.307779935 podStartE2EDuration="2m8.307779935s" 
podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.273340713 +0000 UTC m=+149.901692894" watchObservedRunningTime="2026-01-30 06:47:16.307779935 +0000 UTC m=+149.936132116" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.309467 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" podStartSLOduration=128.30946156 podStartE2EDuration="2m8.30946156s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.307443641 +0000 UTC m=+149.935795812" watchObservedRunningTime="2026-01-30 06:47:16.30946156 +0000 UTC m=+149.937813741" Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.324643 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" event={"ID":"2b9d0f20-53d1-4142-b961-55d553553aed","Type":"ContainerStarted","Data":"ef9e26cf0e0a5cf5a0a8a6975b44a0e701205182dc4d56af5b1362ac5e256305"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.330741 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" event={"ID":"d0267c2e-5b07-4578-bc73-2504b5300313","Type":"ContainerStarted","Data":"7748695294cffe377d254bc1c9403302e87371df099a48ca3975d9189e6c85b2"} Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.343313 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.343647 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.843622057 +0000 UTC m=+150.471974238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.343657 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bdjcm" podStartSLOduration=128.343647034 podStartE2EDuration="2m8.343647034s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.343347049 +0000 UTC m=+149.971699221" watchObservedRunningTime="2026-01-30 06:47:16.343647034 +0000 UTC m=+149.971999214"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.343759 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.345365 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.845354137 +0000 UTC m=+150.473706318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.374993 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4" podStartSLOduration=128.374975989 podStartE2EDuration="2m8.374975989s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.373809724 +0000 UTC m=+150.002161905" watchObservedRunningTime="2026-01-30 06:47:16.374975989 +0000 UTC m=+150.003328170"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.388824 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" event={"ID":"8c17950d-e37b-477d-87d9-d3a92b487ff3","Type":"ContainerStarted","Data":"4d6b41b7e700666c9ec2c8ed280a9868ff71373ebd1b319227a74c70d4ceeedc"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.406683 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" event={"ID":"82561e0e-8f14-4e88-adbb-b0a2b3d8760c","Type":"ContainerStarted","Data":"adaef91138177ba91feb246464529fdbe1ad3c64fe18f281e650d900cd064ae5"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.407951 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" event={"ID":"5dfff538-11e7-4c6b-9db0-c26e2f6b6140","Type":"ContainerStarted","Data":"da664ef61a225672c63b48411b6b3c6bbe1dee91652f1875cb2dbaeb91621d35"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.407979 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" event={"ID":"5dfff538-11e7-4c6b-9db0-c26e2f6b6140","Type":"ContainerStarted","Data":"e07f50f053d001193a1ff2ec0c6f7ca15b3de2b9244002666a2b6ac0fc675f97"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.408577 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.409301 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" event={"ID":"ba04cf12-8677-4024-9c2c-618dfc096d4d","Type":"ContainerStarted","Data":"106ba77b093e9dfe80b8ef17b911598b164bce49d454b7e9a04b3858e13355fd"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.409449 4520 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bjb69 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.409474 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" podUID="5dfff538-11e7-4c6b-9db0-c26e2f6b6140" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.448805 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj" event={"ID":"a2f78b20-5b64-4fb1-8b47-9053654b33a5","Type":"ContainerStarted","Data":"7d5ffe5cae80f605d84154c8931fcb75d263b99138d4c40b77e38fc36a398a12"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.449383 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.449657 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.949644772 +0000 UTC m=+150.577996943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.459180 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.460881 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:16.960869492 +0000 UTC m=+150.589221673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.464658 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bd6fq" event={"ID":"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad","Type":"ContainerStarted","Data":"cde0ecae0890b45d34765c73e1188d605ec8ee31c715a37af3be775a88ca307c"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.473223 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" event={"ID":"a7374ef9-1396-4293-b711-fb07eaa512d0","Type":"ContainerStarted","Data":"9cd9976e4078ba6e7fcb0fbe74ab134a46ae3aa906d2a1aafd31cdf2cdad0780"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.475707 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz" podStartSLOduration=128.475696282 podStartE2EDuration="2m8.475696282s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.473023712 +0000 UTC m=+150.101375894" watchObservedRunningTime="2026-01-30 06:47:16.475696282 +0000 UTC m=+150.104048464"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.497067 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l" event={"ID":"c033428a-1e35-46a7-a589-d2374d629f46","Type":"ContainerStarted","Data":"b2f8eea7ee86e95d876ab117291e7a8431d34c682e771bd4b0af9ffdacbeac13"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.506272 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" event={"ID":"86dea262-c989-43a8-ae6e-e744012a5e07","Type":"ContainerStarted","Data":"c4962aa4990c78eeb89da6fab51f5515f2b1434603223c53931f024a6148e09e"}
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.508761 4520 patch_prober.go:28] interesting pod/downloads-7954f5f757-lflpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.508797 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lflpb" podUID="f56326ab-bf4f-43c5-8762-85cb71c93f0a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.532566 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-z67kf"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.533706 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" podStartSLOduration=128.533694273 podStartE2EDuration="2m8.533694273s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.532790802 +0000 UTC m=+150.161142983" watchObservedRunningTime="2026-01-30 06:47:16.533694273 +0000 UTC m=+150.162046454"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.546266 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:16 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:16 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:16 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.546307 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.560743 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.562018 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.062002783 +0000 UTC m=+150.690354964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.614094 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj" podStartSLOduration=128.614082452 podStartE2EDuration="2m8.614082452s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.574416151 +0000 UTC m=+150.202768332" watchObservedRunningTime="2026-01-30 06:47:16.614082452 +0000 UTC m=+150.242434633"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.668913 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.680936 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.180921895 +0000 UTC m=+150.809274077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.733627 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-w62kb" podStartSLOduration=128.733613256 podStartE2EDuration="2m8.733613256s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:16.614268081 +0000 UTC m=+150.242620252" watchObservedRunningTime="2026-01-30 06:47:16.733613256 +0000 UTC m=+150.361965436"
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.775883 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.776051 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.27603334 +0000 UTC m=+150.904385521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.776177 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.776408 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.276400883 +0000 UTC m=+150.904753063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.877391 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.878131 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.378116981 +0000 UTC m=+151.006469162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:16 crc kubenswrapper[4520]: I0130 06:47:16.979935 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:16 crc kubenswrapper[4520]: E0130 06:47:16.980210 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.480199308 +0000 UTC m=+151.108551480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.081156 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.081589 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.581575907 +0000 UTC m=+151.209928088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.203320 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.203753 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.703743585 +0000 UTC m=+151.332095765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.306041 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.306221 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.806201269 +0000 UTC m=+151.434553439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.306573 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.306879 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.806868905 +0000 UTC m=+151.435221086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.407768 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.408124 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:17.90811049 +0000 UTC m=+151.536462671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.495950 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-w7xl2"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.510316 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.510628 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.010616596 +0000 UTC m=+151.638968778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.538696 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:17 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:17 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:17 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.538744 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.559461 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bd6fq" event={"ID":"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad","Type":"ContainerStarted","Data":"6962f1c685a2adc85a81644df7364fcd6fd1085800fb606c834a9fa5fa5a7dbe"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.559493 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bd6fq" event={"ID":"5c4cb732-fc3d-4607-8051-d1ac81d4b9ad","Type":"ContainerStarted","Data":"f36cc7e54376e6615ae0be49fae2d8a91f27a779dde10a38f0375da093d3e2e5"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.560218 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bd6fq"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.561390 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" event={"ID":"86dea262-c989-43a8-ae6e-e744012a5e07","Type":"ContainerStarted","Data":"2a8dc7f17ef0190cbdf74fc24740afad70d3fa6a4f2eaaa8158a9e5aa4797021"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.562003 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.563339 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body=
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.563361 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.580971 4520 generic.go:334] "Generic (PLEG): container finished" podID="d0267c2e-5b07-4578-bc73-2504b5300313" containerID="0b227aeae0ecb4b846d2f25468f4e3a978a016cf29d61f9de5dd5e4124daf83c" exitCode=0
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.581018 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" event={"ID":"d0267c2e-5b07-4578-bc73-2504b5300313","Type":"ContainerDied","Data":"0b227aeae0ecb4b846d2f25468f4e3a978a016cf29d61f9de5dd5e4124daf83c"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.592053 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x24fr" event={"ID":"e29c0451-b95f-4ddd-ad98-f07a93aa5e5e","Type":"ContainerStarted","Data":"90ca4650ad378446da88bd3860f7a6e3e7d19d132657db5ee00831ea479d353d"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.592146 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x24fr" event={"ID":"e29c0451-b95f-4ddd-ad98-f07a93aa5e5e","Type":"ContainerStarted","Data":"4796612072edf0c393eed82c816b6b0f2674ba5012b2c0d8ec975a4e3be87e24"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.600171 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85" event={"ID":"03f68811-ba27-419e-afa9-1640c681b1fc","Type":"ContainerStarted","Data":"bfe5fb4449daac144acc42ac453843589e69897af098fb360234c1f47db5b1fa"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.611239 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.611742 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.111730772 +0000 UTC m=+151.740082952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.613053 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" event={"ID":"7d200a37-0276-4e2c-b7ef-98107be3f313","Type":"ContainerStarted","Data":"4c7a0b73c98789922db0085dbcc6b8d30dd5128a5010abc97c9369dff2443b4e"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.613631 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.614492 4520 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fd76j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.614593 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" podUID="7d200a37-0276-4e2c-b7ef-98107be3f313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.624783 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" event={"ID":"ba04cf12-8677-4024-9c2c-618dfc096d4d","Type":"ContainerStarted","Data":"d7b4b2af2f092ff23b35bebc895219146d5190cbdd392e37f4f35eb3b403bbe1"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.625418 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.629142 4520 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qln6b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.629346 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" podUID="ba04cf12-8677-4024-9c2c-618dfc096d4d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.631608 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6sjr4" event={"ID":"622e7434-1ad5-41f3-9c60-bfafb7b6dd3a","Type":"ContainerStarted","Data":"00bd9bf60c7e9055abaf370b5a8d5c5088397e0e42e3e7372d6541fd57716a1e"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.633479 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" event={"ID":"2b9d0f20-53d1-4142-b961-55d553553aed","Type":"ContainerStarted","Data":"72cf1ffa4973a5469257d4296ba6373a9f127b9660fdd8f6df837f58e1b15e4d"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.638786 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bd6fq" podStartSLOduration=7.63876904 podStartE2EDuration="7.63876904s" podCreationTimestamp="2026-01-30 06:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:17.624342482 +0000 UTC m=+151.252694664" watchObservedRunningTime="2026-01-30 06:47:17.63876904 +0000 UTC m=+151.267121221"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.643019 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" event={"ID":"0c769d70-6c64-4e67-ad6a-cb99f70c31c0","Type":"ContainerStarted","Data":"5ea4ced0127002b66069448c3384d6697773c97d4a78b1dd5f1be12546089093"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.643109 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" event={"ID":"0c769d70-6c64-4e67-ad6a-cb99f70c31c0","Type":"ContainerStarted","Data":"9d629d1eb0f6f75643e8b8aa5312fb1a9f3575c829d0b3edcda22cd6eb52212c"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.656277 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qgkcs" event={"ID":"dbaed70c-7770-412b-b469-4e5bedbb7df7","Type":"ContainerStarted","Data":"4e0759f72df922cb686eac5abc2ef9e9f40a770480aec21e3fbf29750c49ccea"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.666496 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qv6cz" event={"ID":"755be20c-e623-49b4-8c1b-97f651a664f7","Type":"ContainerStarted","Data":"4290c287ca5a1f75375f901b585d4eb9aeb3fa2f510d40b93112d54a1872ca95"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.669948 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq" event={"ID":"2f6039d5-8443-430a-9f72-26ffc3e3310c","Type":"ContainerStarted","Data":"ed0f21adef5d0e30e6252ff00db4ec14767d05e8c127983f7491db162894c977"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.685614 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" event={"ID":"4a3be9f1-bd40-4667-bdd7-2cf23292fab5","Type":"ContainerStarted","Data":"9a92ed05126d510ec5a530553d10693808717b1cc3b29e7303d9aa7976089b5b"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.686286 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.709182 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hjhkn" event={"ID":"7b49a935-c5ef-4290-a394-ff47774b9172","Type":"ContainerStarted","Data":"34649e23658ec4b3425b7411523c6c0ce714d367b11ab8931119f8008543639c"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.712335 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.712593 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.212583083 +0000 UTC m=+151.840935264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.735929 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" event={"ID":"350b6a45-2c99-453a-9e85-e97a1adc863d","Type":"ContainerStarted","Data":"a36b9458379423d9fd6eff8752f0459f1728424413b43cc5badf4cfbf94e397b"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.747743 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" event={"ID":"1bc42137-1969-4a7f-89d3-8ded4455ee64","Type":"ContainerStarted","Data":"d5c52f56bfb3687f81006cfa3f9e4f5936ccd4cae70ee2b02b19e67117bc5290"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.758287 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" event={"ID":"82561e0e-8f14-4e88-adbb-b0a2b3d8760c","Type":"ContainerStarted","Data":"b48ea393042267008f2cd8ba8d19f809c216aa2faf7f65629646d4a0c47c65ae"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.758318 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" event={"ID":"82561e0e-8f14-4e88-adbb-b0a2b3d8760c","Type":"ContainerStarted","Data":"1c6b2388c0ef12a2e8e6add882e89025f65e6eb7ea8c0af138282645a29d0b7f"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.758717 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.782567 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" event={"ID":"4267d7ff-3907-40fe-ac79-e30e74e13476","Type":"ContainerStarted","Data":"59c54e1bfd80c7e608008922fcf1904aea3394c09287641e1120d0740a7afee5"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.782613 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" event={"ID":"4267d7ff-3907-40fe-ac79-e30e74e13476","Type":"ContainerStarted","Data":"de42ae8effd6e0c7091e1db2f67cec9fd4fee655f280e36ec186ce18307dffaa"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.810116 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x" event={"ID":"b3470d5b-3e9f-4d41-a992-77b47e35ac52","Type":"ContainerStarted","Data":"f7dec308b7c3a1827079eed94163a7659902d7e4c2840a07140b9840f5b88b3e"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.811390 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" podStartSLOduration=129.811380146 podStartE2EDuration="2m9.811380146s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:17.736506198 +0000 UTC m=+151.364858379" watchObservedRunningTime="2026-01-30 06:47:17.811380146 +0000 UTC m=+151.439732328"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.811677 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wvf85" podStartSLOduration=129.811673098 podStartE2EDuration="2m9.811673098s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:17.810599607 +0000 UTC m=+151.438951788" watchObservedRunningTime="2026-01-30 06:47:17.811673098 +0000 UTC m=+151.440025269"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.813525 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.814677 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.314663527 +0000 UTC m=+151.943015708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.834357 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fbccj" event={"ID":"a2f78b20-5b64-4fb1-8b47-9053654b33a5","Type":"ContainerStarted","Data":"1476e3a665f4d3d49de1910e807cd035c90efa2930c442b9478af2699db54325"}
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.888073 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69"
Jan 30 06:47:17 crc kubenswrapper[4520]: I0130 06:47:17.921815 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:17 crc kubenswrapper[4520]: E0130 06:47:17.935336 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.435305933 +0000 UTC m=+152.063658114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.025818 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.025940 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podStartSLOduration=130.025925403 podStartE2EDuration="2m10.025925403s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.0161814 +0000 UTC m=+151.644533581" watchObservedRunningTime="2026-01-30 06:47:18.025925403 +0000 UTC m=+151.654277583"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.026015 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-x24fr" podStartSLOduration=8.026010332 podStartE2EDuration="8.026010332s" podCreationTimestamp="2026-01-30 06:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:17.892092345 +0000 UTC m=+151.520444526" watchObservedRunningTime="2026-01-30 06:47:18.026010332 +0000 UTC m=+151.654362513"
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.026079 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.526064484 +0000 UTC m=+152.154416656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.082537 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podStartSLOduration=130.082523477 podStartE2EDuration="2m10.082523477s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.081570724 +0000 UTC m=+151.709922905" watchObservedRunningTime="2026-01-30 06:47:18.082523477 +0000 UTC m=+151.710875658"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.128900 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.129251 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.62924038 +0000 UTC m=+152.257592551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.192212 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4m8ns" podStartSLOduration=130.192190883 podStartE2EDuration="2m10.192190883s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.133902027 +0000 UTC m=+151.762254207" watchObservedRunningTime="2026-01-30 06:47:18.192190883 +0000 UTC m=+151.820543064"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.230200 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.230381 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.730354976 +0000 UTC m=+152.358707157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.230616 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.230961 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.730952772 +0000 UTC m=+152.359304953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.332117 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.332431 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.832418689 +0000 UTC m=+152.460770869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.433194 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.433562 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:18.933552049 +0000 UTC m=+152.561904231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.452461 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ljplq" podStartSLOduration=130.452438624 podStartE2EDuration="2m10.452438624s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.276076827 +0000 UTC m=+151.904429007" watchObservedRunningTime="2026-01-30 06:47:18.452438624 +0000 UTC m=+152.080790804"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.534349 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.534718 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.03470602 +0000 UTC m=+152.663058201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.537018 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:18 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:18 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:18 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.537048 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.541598 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-s6bks" podStartSLOduration=130.541585791 podStartE2EDuration="2m10.541585791s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.459575749 +0000 UTC m=+152.087927931" watchObservedRunningTime="2026-01-30 06:47:18.541585791 +0000 UTC m=+152.169937972"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.542040 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" podStartSLOduration=130.542036719 podStartE2EDuration="2m10.542036719s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.539830818 +0000 UTC m=+152.168182998" watchObservedRunningTime="2026-01-30 06:47:18.542036719 +0000 UTC m=+152.170388891"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.586879 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" podStartSLOduration=130.586872141 podStartE2EDuration="2m10.586872141s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.584432459 +0000 UTC m=+152.212784639" watchObservedRunningTime="2026-01-30 06:47:18.586872141 +0000 UTC m=+152.215224322"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.612416 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" podStartSLOduration=130.612406269 podStartE2EDuration="2m10.612406269s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.609475942 +0000 UTC m=+152.237828123" watchObservedRunningTime="2026-01-30 06:47:18.612406269 +0000 UTC m=+152.240758440"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.636360 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.636724 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.136713316 +0000 UTC m=+152.765065496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.641237 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jhnpn" podStartSLOduration=130.641226733 podStartE2EDuration="2m10.641226733s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.640124909 +0000 UTC m=+152.268477079" watchObservedRunningTime="2026-01-30 06:47:18.641226733 +0000 UTC m=+152.269578914"
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.737107 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.737378 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.237366092 +0000 UTC m=+152.865718274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.838221 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.838707 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.338685244 +0000 UTC m=+152.967037425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.840464 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x" event={"ID":"b3470d5b-3e9f-4d41-a992-77b47e35ac52","Type":"ContainerStarted","Data":"4c5fbd544edc86f36230dc6809be04865e8e20ef1cf4d79358fc6694a31dbc82"}
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.843899 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l" event={"ID":"c033428a-1e35-46a7-a589-d2374d629f46","Type":"ContainerStarted","Data":"4f04502471d96ce28d1a95baf887dcabc12554814b327a77377856264d285f44"}
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.843923 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l" event={"ID":"c033428a-1e35-46a7-a589-d2374d629f46","Type":"ContainerStarted","Data":"158032a3352ad87e0faa44f8ee1846f27538174276e3492419a58f8f8e00a02a"}
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.846377 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" event={"ID":"d0267c2e-5b07-4578-bc73-2504b5300313","Type":"ContainerStarted","Data":"8e3d80bfed96bfb0781c8f913a12d6763f7aa0948b946cf2f18f4ba6cf4a587c"}
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.846400 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" event={"ID":"d0267c2e-5b07-4578-bc73-2504b5300313","Type":"ContainerStarted","Data":"e71d7754de3e74d6ad5870ce8040e982a44058bdc978a9414f09437fcba44295"}
Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.850654 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cr54l"
event={"ID":"8c17950d-e37b-477d-87d9-d3a92b487ff3","Type":"ContainerStarted","Data":"14b22f7100b7a181ebe3b8844fd89a615683163256ca3836904e986dc894d194"} Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.852247 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" event={"ID":"1bc42137-1969-4a7f-89d3-8ded4455ee64","Type":"ContainerStarted","Data":"2d1a4b37d97867e15af0762879780d69faae206963bac9661b37180f63c17ac5"} Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.854319 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" event={"ID":"63191221-7520-4517-aeed-6d3896c2cad1","Type":"ContainerStarted","Data":"6ae5a59636e69c300c6046adfc5bc24d0a90d25870f2a35d20e2f4bb874b4f6c"} Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.855460 4520 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fd76j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.855570 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" podUID="7d200a37-0276-4e2c-b7ef-98107be3f313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.862659 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.866475 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8pt4x" podStartSLOduration=130.866466591 podStartE2EDuration="2m10.866466591s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.865182183 +0000 UTC m=+152.493534364" watchObservedRunningTime="2026-01-30 06:47:18.866466591 +0000 UTC m=+152.494818773" Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.905792 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" podStartSLOduration=130.905783965 podStartE2EDuration="2m10.905783965s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.903428581 +0000 UTC m=+152.531780762" watchObservedRunningTime="2026-01-30 06:47:18.905783965 +0000 UTC m=+152.534136146" Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.917441 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-85d5l" podStartSLOduration=130.917434617 podStartE2EDuration="2m10.917434617s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.915677008 +0000 UTC m=+152.544029179" watchObservedRunningTime="2026-01-30 06:47:18.917434617 
+0000 UTC m=+152.545786798" Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.943437 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.943606 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.443589163 +0000 UTC m=+153.071941334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.943839 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:18 crc kubenswrapper[4520]: E0130 06:47:18.944498 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.444486952 +0000 UTC m=+153.072839134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.975775 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" podStartSLOduration=130.975760524 podStartE2EDuration="2m10.975760524s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.9715937 +0000 UTC m=+152.599945881" watchObservedRunningTime="2026-01-30 06:47:18.975760524 +0000 UTC m=+152.604112705" Jan 30 06:47:18 crc kubenswrapper[4520]: I0130 06:47:18.991834 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4pxnp" podStartSLOduration=130.991828101 podStartE2EDuration="2m10.991828101s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:18.991426194 +0000 UTC m=+152.619778376" watchObservedRunningTime="2026-01-30 06:47:18.991828101 +0000 UTC m=+152.620180283" Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.045213 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.045344 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.545325939 +0000 UTC m=+153.173678111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.047595 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.047887 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.547876521 +0000 UTC m=+153.176228702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.154045 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.154143 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.654130391 +0000 UTC m=+153.282482573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.154281 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.154501 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.654494116 +0000 UTC m=+153.282846298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.254839 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.255427 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.755408304 +0000 UTC m=+153.383760485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.356632 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.356976 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.856962088 +0000 UTC m=+153.485314268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.458601 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.458714 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.958697131 +0000 UTC m=+153.587049312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.458881 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.459139 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:19.959132882 +0000 UTC m=+153.587485053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.505413 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4"
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.532829 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:19 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:19 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:19 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.532873 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.560104 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.560232 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.06021717 +0000 UTC m=+153.688569351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
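
The router probe output above shows the aggregated healthz format: one [+]/[-] line per named sub-check (backend-http, has-synced, process-running) followed by an overall verdict, and any failing check turns the endpoint into an HTTP 500, which prober.go then records as a startup-probe failure. An illustrative Go handler that produces the same shape; the sub-check names are taken from the log, everything else is invented for the sketch:

// Sketch of an aggregated /healthz endpoint: run each named check, emit a
// [+]/[-] line per check, return 500 if any check failed. Not the router's
// actual implementation.
package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, body+"ok")
	}
}

func main() {
	http.HandleFunc("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("no backends ready") }},
		{"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
		{"process-running", func() error { return nil }},
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
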
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.560392 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.560698 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.060686343 +0000 UTC m=+153.689038524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.661097 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.661237 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.161212762 +0000 UTC m=+153.789564944 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.661316 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.661552 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.161543805 +0000 UTC m=+153.789895976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.761842 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.762093 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.262072548 +0000 UTC m=+153.890424730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.762296 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.762550 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.262541471 +0000 UTC m=+153.890893641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.855634 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.855683 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.862965 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.863255 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.363243791 +0000 UTC m=+153.991595971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.875199 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" event={"ID":"8c17950d-e37b-477d-87d9-d3a92b487ff3","Type":"ContainerStarted","Data":"07bc631f3439d03c9e302352fbfa1dde16a6054b730c476cbdde92010355cf11"}
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.875229 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" event={"ID":"8c17950d-e37b-477d-87d9-d3a92b487ff3","Type":"ContainerStarted","Data":"68f418408853ef946df26532cbe10c67c1b5a1faee3807d876045bd8cd7cac50"}
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.877262 4520 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fd76j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.877288 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" podUID="7d200a37-0276-4e2c-b7ef-98107be3f313" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 06:47:19 crc kubenswrapper[4520]: I0130 06:47:19.964539 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:19 crc kubenswrapper[4520]: E0130 06:47:19.966293 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.466279873 +0000 UTC m=+154.094632054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
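
The two ContainerStarted events for hostpath-provisioner/csi-hostpathplugin-cr54l above are the turning point in this log: once the plugin pod is up and has registered its driver with the kubelet (CSI node plugins typically do this through a registration socket under /var/lib/kubelet/plugins_registry/), the MountDevice/TearDown retry loop can finally succeed. A diagnostic sketch, assuming a standard kubeconfig, that lists CSIDriver objects so kubevirt.io.hostpath-provisioner can be watched for; note this is the API server's view, which complements but is not identical to the kubelet's own in-memory registration list:

// Sketch: list storage.k8s.io/v1 CSIDriver objects with client-go and print
// their names. Assumes a kubeconfig at $HOME/.kube/config.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	drivers, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range drivers.Items {
		fmt.Println(d.Name) // kubevirt.io.hostpath-provisioner should appear here
	}
}
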
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.065596 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.065737 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.565719717 +0000 UTC m=+154.194071898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.065831 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.066152 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.566145228 +0000 UTC m=+154.194497408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.119827 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hgth8"]
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.120639 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgth8"
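
"SyncLoop ADD" followed by "No sandbox for pod can be found. Need to start a new one" is the normal cold-start path for a freshly scheduled pod: the API server delivers certified-operators-hgth8, the kubelet finds no existing sandbox for it, and creates one before any containers can start. A toy version of that dispatch pattern, with invented names rather than the kubelet's actual types:

// Toy sync loop: pod events arrive on a channel; a pod seen for the first
// time has no sandbox yet, so one is "created" before the event is handled.
package main

import "fmt"

type podEvent struct {
	op  string // "ADD" or "UPDATE"
	pod string
}

func main() {
	events := make(chan podEvent, 2)
	events <- podEvent{"ADD", "openshift-marketplace/certified-operators-hgth8"}
	events <- podEvent{"UPDATE", "openshift-marketplace/certified-operators-hgth8"}
	close(events)

	sandboxes := map[string]bool{}
	for e := range events {
		if !sandboxes[e.pod] {
			fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", e.pod)
			sandboxes[e.pod] = true
		}
		fmt.Printf("SyncLoop %s pod=%q\n", e.op, e.pod)
	}
}
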
Need to start a new one" pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.143915 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.163795 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgth8"] Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.166684 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.166780 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.666768249 +0000 UTC m=+154.295120429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.166936 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-catalog-content\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.166960 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjvb\" (UniqueName: \"kubernetes.io/projected/1186824d-c461-481a-aad1-1e0672b8bcab-kube-api-access-pmjvb\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.167021 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.167135 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-utilities\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.167414 4520 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.66740702 +0000 UTC m=+154.295759201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.267784 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.267952 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-utilities\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.268064 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-catalog-content\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.268085 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmjvb\" (UniqueName: \"kubernetes.io/projected/1186824d-c461-481a-aad1-1e0672b8bcab-kube-api-access-pmjvb\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.268461 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.76844886 +0000 UTC m=+154.396801040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.268781 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-utilities\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.268984 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-catalog-content\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.282723 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.308964 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zm96m"]
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.309807 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.315013 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.336331 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmjvb\" (UniqueName: \"kubernetes.io/projected/1186824d-c461-481a-aad1-1e0672b8bcab-kube-api-access-pmjvb\") pod \"certified-operators-hgth8\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " pod="openshift-marketplace/certified-operators-hgth8"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.338540 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zm96m"]
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.369639 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-utilities\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.369695 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.369714 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/1d813745-1351-4573-a0ee-7fd8e3332c6e-kube-api-access-sb8gs\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m"
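
Note the contrast in the entries above: the empty-dir and projected volumes of the new marketplace pods go from VerifyControllerAttachedVolume to MountVolume started to MountVolume.SetUp succeeded almost immediately, while the CSI-backed PVC keeps cycling. The reconciler re-derives its work on every pass by diffing desired state (the pods' volume specs) against actual state (what is currently mounted), so a failing volume simply stays on the to-do list without blocking the others. A minimal diff-and-act sketch with invented types, not the kubelet's actual code:

// Mount whatever is desired but not yet mounted; unmount whatever is mounted
// but no longer desired. A volume whose mount keeps failing stays pending
// across passes, exactly like the CSI PVC in the log.
package main

import "fmt"

func reconcile(desired, actual map[string]bool, mount, unmount func(string) error) {
	for v := range desired {
		if !actual[v] {
			if err := mount(v); err != nil {
				fmt.Println("Error:", err) // retried on the next pass
				continue
			}
			actual[v] = true
		}
	}
	for v := range actual {
		if !desired[v] {
			if err := unmount(v); err != nil {
				fmt.Println("Error:", err)
				continue
			}
			delete(actual, v)
		}
	}
}

func main() {
	desired := map[string]bool{"utilities": true, "catalog-content": true, "pvc-657094db": true}
	actual := map[string]bool{}
	mount := func(v string) error {
		if v == "pvc-657094db" { // CSI driver not registered yet
			return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found")
		}
		return nil
	}
	unmount := func(v string) error { return nil }
	// Two passes: the empty-dir style volumes succeed immediately; the CSI
	// volume fails both times and remains pending.
	reconcile(desired, actual, mount, unmount)
	reconcile(desired, actual, mount, unmount)
	fmt.Println("mounted:", actual)
}
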
started for volume \"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/1d813745-1351-4573-a0ee-7fd8e3332c6e-kube-api-access-sb8gs\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.369745 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-catalog-content\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m" Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.369955 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.869941066 +0000 UTC m=+154.498293247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.432453 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.471013 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.471210 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.971188221 +0000 UTC m=+154.599540403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.471251 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-utilities\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.471312 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.471329 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/1d813745-1351-4573-a0ee-7fd8e3332c6e-kube-api-access-sb8gs\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.471359 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-catalog-content\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.471627 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:20.971619333 +0000 UTC m=+154.599971514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.472085 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-utilities\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.472133 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-catalog-content\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.495910 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b5sch"]
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.496659 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.497753 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/1d813745-1351-4573-a0ee-7fd8e3332c6e-kube-api-access-sb8gs\") pod \"community-operators-zm96m\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.517930 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5sch"]
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.536648 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:20 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:20 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:20 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.536686 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.571926 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.572051 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.072035875 +0000 UTC m=+154.700388046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.572166 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-catalog-content\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.572317 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-utilities\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.572438 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.572459 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmx6\" (UniqueName: \"kubernetes.io/projected/40fa3317-086a-4e6e-bc50-3d267cb056f9-kube-api-access-fvmx6\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.572709 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.072687071 +0000 UTC m=+154.701039252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.632652 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zm96m"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.672982 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.673415 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-utilities\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.673493 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmx6\" (UniqueName: \"kubernetes.io/projected/40fa3317-086a-4e6e-bc50-3d267cb056f9-kube-api-access-fvmx6\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.673532 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-catalog-content\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.673894 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-catalog-content\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.673954 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.173942582 +0000 UTC m=+154.802294753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.674131 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-utilities\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.725453 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmx6\" (UniqueName: \"kubernetes.io/projected/40fa3317-086a-4e6e-bc50-3d267cb056f9-kube-api-access-fvmx6\") pod \"certified-operators-b5sch\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") " pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.729858 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kcz8t"]
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.730810 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.751060 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kcz8t"]
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.774581 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-catalog-content\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.774635 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.774654 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-656hr\" (UniqueName: \"kubernetes.io/projected/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-kube-api-access-656hr\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.774689 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-utilities\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.774920 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.274910291 +0000 UTC m=+154.903262472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.815203 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.876629 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.876763 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-catalog-content\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.876802 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-656hr\" (UniqueName: \"kubernetes.io/projected/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-kube-api-access-656hr\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.876833 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-utilities\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.877164 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-utilities\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.877219 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.377207312 +0000 UTC m=+155.005559493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
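Each failure above is re-queued by the operation executor with a "No retries permitted until ..." deadline; the timestamps show a steady 500ms grace period (durationBeforeRetry 500ms) between attempts on this volume. A sketch of that gating logic, assuming a doubling backoff policy for illustration (the real kubelet policy and cap are version-dependent):

package main

import (
	"fmt"
	"time"
)

// pendingOp mirrors the idea behind nestedpendingoperations: an
// operation that failed may not run again before retryAfter.
type pendingOp struct {
	lastErr    error
	retryAfter time.Time
	backoff    time.Duration
}

func (op *pendingOp) fail(err error, now time.Time) {
	op.lastErr = err
	op.retryAfter = now.Add(op.backoff)
	// Assumed doubling-with-cap policy; the kubelet grows the delay
	// per consecutive failure up to a maximum.
	if op.backoff < 2*time.Minute {
		op.backoff *= 2
	}
}

func (op *pendingOp) allowed(now time.Time) bool {
	return !now.Before(op.retryAfter)
}

func main() {
	op := &pendingOp{backoff: 500 * time.Millisecond}
	now := time.Now()
	op.fail(fmt.Errorf("driver not registered"), now)
	fmt.Println("retry allowed immediately?", op.allowed(now))                           // false
	fmt.Println("retry allowed after 501ms?", op.allowed(now.Add(501*time.Millisecond))) // true
}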
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.877392 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-catalog-content\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.916155 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-656hr\" (UniqueName: \"kubernetes.io/projected/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-kube-api-access-656hr\") pod \"community-operators-kcz8t\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") " pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.956596 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" event={"ID":"8c17950d-e37b-477d-87d9-d3a92b487ff3","Type":"ContainerStarted","Data":"f55092f1ad662b9eb6d2304ad0973706ed070feb902c5fe27f7349244bf49854"}
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.977970 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:20 crc kubenswrapper[4520]: E0130 06:47:20.979185 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.479171829 +0000 UTC m=+155.107524010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:20 crc kubenswrapper[4520]: I0130 06:47:20.982996 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" podStartSLOduration=10.982985878000001 podStartE2EDuration="10.982985878s" podCreationTimestamp="2026-01-30 06:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:20.98088801 +0000 UTC m=+154.609240190" watchObservedRunningTime="2026-01-30 06:47:20.982985878 +0000 UTC m=+154.611338059"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.005170 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgth8"]
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.065064 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.081081 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.081346 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.581334117 +0000 UTC m=+155.209686298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.170041 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zm96m"]
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.183021 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.183277 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.683267413 +0000 UTC m=+155.311619585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.183329 4520 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.284029 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.284613 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.78460061 +0000 UTC m=+155.412952791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.322134 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5sch"]
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.385751 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.386047 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.886036011 +0000 UTC m=+155.514388191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
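The plugin_watcher entry at 06:47:21.183329 is the turning point: the driver's registration socket kubevirt.io.hostpath-provisioner-reg.sock has appeared under /var/lib/kubelet/plugins_registry, which is what will shortly let the retries above succeed. A sketch of watching that directory for new registration sockets, assuming the fsnotify package (the kubelet uses an equivalent inotify-based watcher, not this exact code):

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Watch the kubelet plugin-registration directory for new
	// *-reg.sock files, the same signal plugin_watcher.go logs above.
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, "-reg.sock") {
				fmt.Println("new plugin registration socket:", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}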
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.486043 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.486257 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:21.986244761 +0000 UTC m=+155.614596942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.498690 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kcz8t"]
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.533876 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:21 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:21 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:21 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.533913 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.587010 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.587279 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:22.087268776 +0000 UTC m=+155.715620958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
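The router's startup probe keeps returning HTTP 500 with an aggregated check body: [-] lines for failing checks (backend-http, has-synced), [+] for passing ones (process-running). Handlers of this shape report success only once every registered check passes. A sketch of that aggregation pattern, with made-up check functions (not the router's actual checks):

package main

import (
	"fmt"
	"log"
	"net/http"
)

// healthz aggregates named checks the way the probe output above
// does: any failing check yields HTTP 500 and a [-] line.
func healthz(checks map[string]func() error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for name, check := range checks {
			if err := check(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := map[string]func() error{
		"process-running": func() error { return nil },
		"backend-http":    func() error { return fmt.Errorf("not ready") }, // assumed failing check
	}
	http.Handle("/healthz", healthz(checks))
	log.Fatal(http.ListenAndServe(":8080", nil))
}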
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.687731 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.687885 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:22.187866499 +0000 UTC m=+155.816218681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.688080 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.688346 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:22.188339229 +0000 UTC m=+155.816691410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.778285 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.779130 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.782704 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.783762 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.789009 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.789207 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 06:47:22.28919623 +0000 UTC m=+155.917548410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.789275 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.789357 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.789432 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 06:47:21 crc kubenswrapper[4520]: E0130 06:47:21.789704 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 06:47:22.289694277 +0000 UTC m=+155.918046458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-54cnn" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.792643 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.838236 4520 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T06:47:21.183344388Z","Handler":null,"Name":""}
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.840303 4520 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.840333 4520 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.890775 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.891010 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.891096 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.891164 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.902635 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
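Registration completes here: the reconciler picks the socket up from the desired-state cache, csi_plugin.go validates the driver's advertised CSI version (1.0.0) and registers it, and the very next TearDown attempt (06:47:21.902635) succeeds after roughly two minutes of retries. A simplified sketch of the validate-then-register step; a plain string set stands in for the real semver negotiation:

package main

import "fmt"

// validateVersions mimics the acceptance step logged by csi_plugin.go:
// the kubelet registers a driver only if it advertises a version the
// kubelet supports (simplified to exact string matching here).
func validateVersions(advertised []string, supported map[string]bool) (string, error) {
	for _, v := range advertised {
		if supported[v] {
			return v, nil
		}
	}
	return "", fmt.Errorf("none of the advertised versions %v are supported", advertised)
}

func main() {
	supported := map[string]bool{"1.0.0": true}
	v, err := validateVersions([]string{"1.0.0"}, supported)
	if err != nil {
		fmt.Println("registration rejected:", err)
		return
	}
	fmt.Printf("Register new plugin with name: kubevirt.io.hostpath-provisioner (CSI %s)\n", v)
}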
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.907727 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.960231 4520 generic.go:334] "Generic (PLEG): container finished" podID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerID="5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b" exitCode=0 Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.960283 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5sch" event={"ID":"40fa3317-086a-4e6e-bc50-3d267cb056f9","Type":"ContainerDied","Data":"5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.960306 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5sch" event={"ID":"40fa3317-086a-4e6e-bc50-3d267cb056f9","Type":"ContainerStarted","Data":"9ec85465118480f897c2d9bc7099284254e7c730623d4c27a2195dc9a7b8b6be"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.961708 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.963988 4520 generic.go:334] "Generic (PLEG): container finished" podID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerID="d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf" exitCode=0 Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.964040 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcz8t" event={"ID":"6ebd5875-2b47-4f0d-b8ad-15709cff81b9","Type":"ContainerDied","Data":"d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.964057 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcz8t" event={"ID":"6ebd5875-2b47-4f0d-b8ad-15709cff81b9","Type":"ContainerStarted","Data":"b2a2cf53806eeadf1f17a28089d40caa84dc4aadbbd38a8e51fbb72c6e5126c2"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.965320 4520 generic.go:334] "Generic (PLEG): container finished" podID="350b6a45-2c99-453a-9e85-e97a1adc863d" containerID="a36b9458379423d9fd6eff8752f0459f1728424413b43cc5badf4cfbf94e397b" exitCode=0 Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.965361 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" event={"ID":"350b6a45-2c99-453a-9e85-e97a1adc863d","Type":"ContainerDied","Data":"a36b9458379423d9fd6eff8752f0459f1728424413b43cc5badf4cfbf94e397b"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.966359 4520 generic.go:334] "Generic (PLEG): container finished" podID="1186824d-c461-481a-aad1-1e0672b8bcab" containerID="b7e94e61c2a8064f315a5b44901790d379c7b67c1e3e93742e488093f2614e0d" exitCode=0 Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.966413 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgth8" 
event={"ID":"1186824d-c461-481a-aad1-1e0672b8bcab","Type":"ContainerDied","Data":"b7e94e61c2a8064f315a5b44901790d379c7b67c1e3e93742e488093f2614e0d"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.966433 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgth8" event={"ID":"1186824d-c461-481a-aad1-1e0672b8bcab","Type":"ContainerStarted","Data":"785f6ee841d8b37c019fba4c0c4bd1be68868cb319408154f45601efc638ab5e"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.968207 4520 generic.go:334] "Generic (PLEG): container finished" podID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerID="5daf42e469a3d335f07f26f20ca6ecaa11aaa680b5de0e02585444e0aa84e701" exitCode=0 Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.968886 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zm96m" event={"ID":"1d813745-1351-4573-a0ee-7fd8e3332c6e","Type":"ContainerDied","Data":"5daf42e469a3d335f07f26f20ca6ecaa11aaa680b5de0e02585444e0aa84e701"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.968902 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zm96m" event={"ID":"1d813745-1351-4573-a0ee-7fd8e3332c6e","Type":"ContainerStarted","Data":"540b33033c827a2020f996d71972f9215319d92f6f4f49ff3e9cbf6f9d3072a4"} Jan 30 06:47:21 crc kubenswrapper[4520]: I0130 06:47:21.997603 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.016569 4520 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.016642 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.074281 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-54cnn\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") " pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.081633 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.091980 4520 util.go:30] "No sandbox for pod can be found. 
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.301397 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.324975 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-54cnn"]
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.499533 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q6zxm"]
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.506804 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.506942 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6zxm"]
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.508904 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.533479 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:22 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:22 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:22 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.533534 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.606769 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-catalog-content\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.606855 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vfx\" (UniqueName: \"kubernetes.io/projected/f7e7a17d-563e-41ac-ba83-9a513203f5cb-kube-api-access-m2vfx\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.606906 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-utilities\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.657299 4520 patch_prober.go:28] interesting pod/downloads-7954f5f757-lflpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.657357 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lflpb" podUID="f56326ab-bf4f-43c5-8762-85cb71c93f0a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.657606 4520 patch_prober.go:28] interesting pod/downloads-7954f5f757-lflpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.657729 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lflpb" podUID="f56326ab-bf4f-43c5-8762-85cb71c93f0a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.695440 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.709206 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-catalog-content\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.709307 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2vfx\" (UniqueName: \"kubernetes.io/projected/f7e7a17d-563e-41ac-ba83-9a513203f5cb-kube-api-access-m2vfx\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.709378 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-utilities\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.710317 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-utilities\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.710319 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-catalog-content\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.730598 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2vfx\" (UniqueName: \"kubernetes.io/projected/f7e7a17d-563e-41ac-ba83-9a513203f5cb-kube-api-access-m2vfx\") pod \"redhat-marketplace-q6zxm\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.822545 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.908895 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j789c"]
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.911991 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j789c"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.912310 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j789c"]
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.978356 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:22 crc kubenswrapper[4520]: I0130 06:47:22.978798 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.000906 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.012740 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-catalog-content\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.012799 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-utilities\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.012830 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxzl\" (UniqueName: \"kubernetes.io/projected/53876f72-b696-4749-9677-8aed346a928b-kube-api-access-ttxzl\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.036156 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.036183 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.038694 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" event={"ID":"28a7e740-6b3e-49a1-ac09-f802137f6a84","Type":"ContainerStarted","Data":"45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2"}
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.038737 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" event={"ID":"28a7e740-6b3e-49a1-ac09-f802137f6a84","Type":"ContainerStarted","Data":"534b03bfd48e13702765f71e86687d5a1b2255e4fea140d639a3f97782c2d4a8"}
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.038849 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.043468 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1a3cc1ad-797f-4d4f-81b1-06476d91ec43","Type":"ContainerStarted","Data":"0d42cd4cd12085005610b56805a4aeb28b2ad7af42aca6e97e236e968767c385"}
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.043490 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1a3cc1ad-797f-4d4f-81b1-06476d91ec43","Type":"ContainerStarted","Data":"b8555a1374f6887f7ba9aa420431b7c545ae962c12d52009dbd0cb0f9c3b6314"}
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.045854 4520 patch_prober.go:28] interesting pod/console-f9d7485db-nkbdc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.045907 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nkbdc" podUID="d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.075300 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" podStartSLOduration=135.075290171 podStartE2EDuration="2m15.075290171s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:23.057760202 +0000 UTC m=+156.686112383" watchObservedRunningTime="2026-01-30 06:47:23.075290171 +0000 UTC m=+156.703642352"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.078003 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.078676 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.088728 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.111440 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.111429763 podStartE2EDuration="2.111429763s" podCreationTimestamp="2026-01-30 06:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:23.075899108 +0000 UTC m=+156.704251289" watchObservedRunningTime="2026-01-30 06:47:23.111429763 +0000 UTC m=+156.739781944"
Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.116154 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-catalog-content\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c"
06:47:23.116154 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-catalog-content\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.116204 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-utilities\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.116225 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxzl\" (UniqueName: \"kubernetes.io/projected/53876f72-b696-4749-9677-8aed346a928b-kube-api-access-ttxzl\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.117702 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-catalog-content\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.117751 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-utilities\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.134884 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttxzl\" (UniqueName: \"kubernetes.io/projected/53876f72-b696-4749-9677-8aed346a928b-kube-api-access-ttxzl\") pod \"redhat-marketplace-j789c\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.222472 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.261399 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.292467 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4kzxr"] Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.293731 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.295973 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.296599 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4kzxr"] Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.412795 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.421195 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-utilities\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.421255 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvnz\" (UniqueName: \"kubernetes.io/projected/257ef61b-c019-4bea-8449-f5b2f9a27e47-kube-api-access-xtvnz\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.421332 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-catalog-content\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.480016 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6zxm"] Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.499633 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqkfw"] Jan 30 06:47:23 crc kubenswrapper[4520]: E0130 06:47:23.499922 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350b6a45-2c99-453a-9e85-e97a1adc863d" containerName="collect-profiles" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.499934 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="350b6a45-2c99-453a-9e85-e97a1adc863d" containerName="collect-profiles" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.500049 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="350b6a45-2c99-453a-9e85-e97a1adc863d" containerName="collect-profiles" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.500750 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.504651 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqkfw"] Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.523530 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/350b6a45-2c99-453a-9e85-e97a1adc863d-secret-volume\") pod \"350b6a45-2c99-453a-9e85-e97a1adc863d\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.523582 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgnhr\" (UniqueName: \"kubernetes.io/projected/350b6a45-2c99-453a-9e85-e97a1adc863d-kube-api-access-xgnhr\") pod \"350b6a45-2c99-453a-9e85-e97a1adc863d\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.523657 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume\") pod \"350b6a45-2c99-453a-9e85-e97a1adc863d\" (UID: \"350b6a45-2c99-453a-9e85-e97a1adc863d\") " Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.523935 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-catalog-content\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.524061 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-utilities\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.524142 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvnz\" (UniqueName: \"kubernetes.io/projected/257ef61b-c019-4bea-8449-f5b2f9a27e47-kube-api-access-xtvnz\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.525231 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume" (OuterVolumeSpecName: "config-volume") pod "350b6a45-2c99-453a-9e85-e97a1adc863d" (UID: "350b6a45-2c99-453a-9e85-e97a1adc863d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.525690 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-catalog-content\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.525946 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-utilities\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.530354 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.533772 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 06:47:23 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld Jan 30 06:47:23 crc kubenswrapper[4520]: [+]process-running ok Jan 30 06:47:23 crc kubenswrapper[4520]: healthz check failed Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.533893 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.546198 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvnz\" (UniqueName: \"kubernetes.io/projected/257ef61b-c019-4bea-8449-f5b2f9a27e47-kube-api-access-xtvnz\") pod \"redhat-operators-4kzxr\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.546426 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350b6a45-2c99-453a-9e85-e97a1adc863d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "350b6a45-2c99-453a-9e85-e97a1adc863d" (UID: "350b6a45-2c99-453a-9e85-e97a1adc863d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.548349 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350b6a45-2c99-453a-9e85-e97a1adc863d-kube-api-access-xgnhr" (OuterVolumeSpecName: "kube-api-access-xgnhr") pod "350b6a45-2c99-453a-9e85-e97a1adc863d" (UID: "350b6a45-2c99-453a-9e85-e97a1adc863d"). InnerVolumeSpecName "kube-api-access-xgnhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.607412 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.637410 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n88nl\" (UniqueName: \"kubernetes.io/projected/24fc2386-ea09-46c6-a097-f4c302b305b7-kube-api-access-n88nl\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.637561 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-utilities\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.638321 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-catalog-content\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.639065 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/350b6a45-2c99-453a-9e85-e97a1adc863d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.639141 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/350b6a45-2c99-453a-9e85-e97a1adc863d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.639312 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgnhr\" (UniqueName: \"kubernetes.io/projected/350b6a45-2c99-453a-9e85-e97a1adc863d-kube-api-access-xgnhr\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.683752 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j789c"] Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.740271 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-utilities\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.740307 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-catalog-content\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.740349 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n88nl\" (UniqueName: \"kubernetes.io/projected/24fc2386-ea09-46c6-a097-f4c302b305b7-kube-api-access-n88nl\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.742327 4520 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-catalog-content\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.742447 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-utilities\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.762710 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n88nl\" (UniqueName: \"kubernetes.io/projected/24fc2386-ea09-46c6-a097-f4c302b305b7-kube-api-access-n88nl\") pod \"redhat-operators-gqkfw\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") " pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.824557 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:23 crc kubenswrapper[4520]: I0130 06:47:23.912746 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4kzxr"] Jan 30 06:47:23 crc kubenswrapper[4520]: W0130 06:47:23.945504 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257ef61b_c019_4bea_8449_f5b2f9a27e47.slice/crio-eee2c81f31ba9368eef578fcf980d1e4f5223129c5c78c6a521a2bdf226925a6 WatchSource:0}: Error finding container eee2c81f31ba9368eef578fcf980d1e4f5223129c5c78c6a521a2bdf226925a6: Status 404 returned error can't find the container with id eee2c81f31ba9368eef578fcf980d1e4f5223129c5c78c6a521a2bdf226925a6 Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.117177 4520 generic.go:334] "Generic (PLEG): container finished" podID="53876f72-b696-4749-9677-8aed346a928b" containerID="b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc" exitCode=0 Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.117270 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j789c" event={"ID":"53876f72-b696-4749-9677-8aed346a928b","Type":"ContainerDied","Data":"b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc"} Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.117862 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j789c" event={"ID":"53876f72-b696-4749-9677-8aed346a928b","Type":"ContainerStarted","Data":"f9e16647f30dbce40234aad8346d5b8ccc3f6a9d1735cecdeb29f0e5eefa522d"} Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.129200 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" event={"ID":"350b6a45-2c99-453a-9e85-e97a1adc863d","Type":"ContainerDied","Data":"20fde1c31d9fdb12e9c4c73dee4c6a50945c9239b4e7532525aeeff45e713d60"} Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.129245 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fde1c31d9fdb12e9c4c73dee4c6a50945c9239b4e7532525aeeff45e713d60" Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.129329 4520 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms" Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.133996 4520 generic.go:334] "Generic (PLEG): container finished" podID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerID="8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468" exitCode=0 Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.134050 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6zxm" event={"ID":"f7e7a17d-563e-41ac-ba83-9a513203f5cb","Type":"ContainerDied","Data":"8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468"} Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.134072 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6zxm" event={"ID":"f7e7a17d-563e-41ac-ba83-9a513203f5cb","Type":"ContainerStarted","Data":"4cb600f7ef803b2ad9b6c559662233d2fab32ee402f584f138f423e2ec6a7d50"} Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.136627 4520 generic.go:334] "Generic (PLEG): container finished" podID="1a3cc1ad-797f-4d4f-81b1-06476d91ec43" containerID="0d42cd4cd12085005610b56805a4aeb28b2ad7af42aca6e97e236e968767c385" exitCode=0 Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.136669 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1a3cc1ad-797f-4d4f-81b1-06476d91ec43","Type":"ContainerDied","Data":"0d42cd4cd12085005610b56805a4aeb28b2ad7af42aca6e97e236e968767c385"} Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.139107 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzxr" event={"ID":"257ef61b-c019-4bea-8449-f5b2f9a27e47","Type":"ContainerStarted","Data":"eee2c81f31ba9368eef578fcf980d1e4f5223129c5c78c6a521a2bdf226925a6"} Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.146885 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hzv4j" Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.150771 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2vpl2" Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.234666 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqkfw"] Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.540596 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 06:47:24 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld Jan 30 06:47:24 crc kubenswrapper[4520]: [+]process-running ok Jan 30 06:47:24 crc kubenswrapper[4520]: healthz check failed Jan 30 06:47:24 crc kubenswrapper[4520]: I0130 06:47:24.540649 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.157566 4520 generic.go:334] "Generic (PLEG): container finished" podID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerID="7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5" exitCode=0 
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.158096 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzxr" event={"ID":"257ef61b-c019-4bea-8449-f5b2f9a27e47","Type":"ContainerDied","Data":"7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5"}
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.166084 4520 generic.go:334] "Generic (PLEG): container finished" podID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerID="96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa" exitCode=0
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.166793 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkfw" event={"ID":"24fc2386-ea09-46c6-a097-f4c302b305b7","Type":"ContainerDied","Data":"96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa"}
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.166809 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkfw" event={"ID":"24fc2386-ea09-46c6-a097-f4c302b305b7","Type":"ContainerStarted","Data":"bc5b16b6dfff5d457b785abb865334247ae2e79949ccbffb5c49ab738aa20b25"}
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.254269 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bd6fq"
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.541358 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:25 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:25 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:25 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.541442 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.556906 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.681191 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kube-api-access\") pod \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\" (UID: \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\") "
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.681274 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kubelet-dir\") pod \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\" (UID: \"1a3cc1ad-797f-4d4f-81b1-06476d91ec43\") "
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.681540 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1a3cc1ad-797f-4d4f-81b1-06476d91ec43" (UID: "1a3cc1ad-797f-4d4f-81b1-06476d91ec43"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.704443 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1a3cc1ad-797f-4d4f-81b1-06476d91ec43" (UID: "1a3cc1ad-797f-4d4f-81b1-06476d91ec43"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.783051 4520 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 06:47:25 crc kubenswrapper[4520]: I0130 06:47:25.783092 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a3cc1ad-797f-4d4f-81b1-06476d91ec43-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 06:47:26 crc kubenswrapper[4520]: I0130 06:47:26.214775 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 06:47:26 crc kubenswrapper[4520]: I0130 06:47:26.219635 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1a3cc1ad-797f-4d4f-81b1-06476d91ec43","Type":"ContainerDied","Data":"b8555a1374f6887f7ba9aa420431b7c545ae962c12d52009dbd0cb0f9c3b6314"}
Jan 30 06:47:26 crc kubenswrapper[4520]: I0130 06:47:26.219659 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8555a1374f6887f7ba9aa420431b7c545ae962c12d52009dbd0cb0f9c3b6314"
Jan 30 06:47:26 crc kubenswrapper[4520]: I0130 06:47:26.534215 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:26 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:26 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:26 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:26 crc kubenswrapper[4520]: I0130 06:47:26.534258 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.532412 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:27 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:27 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:27 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.532456 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.596184 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 06:47:27 crc kubenswrapper[4520]: E0130 06:47:27.596564 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3cc1ad-797f-4d4f-81b1-06476d91ec43" containerName="pruner" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.596578 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3cc1ad-797f-4d4f-81b1-06476d91ec43" containerName="pruner" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.596670 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a3cc1ad-797f-4d4f-81b1-06476d91ec43" containerName="pruner" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.596973 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.599078 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.602226 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.605657 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.717268 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.717367 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.793975 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.794022 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.818238 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.818292 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.818299 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.832151 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:27 crc kubenswrapper[4520]: I0130 06:47:27.921687 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:28 crc kubenswrapper[4520]: I0130 06:47:28.533993 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 06:47:28 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld Jan 30 06:47:28 crc kubenswrapper[4520]: [+]process-running ok Jan 30 06:47:28 crc kubenswrapper[4520]: healthz check failed Jan 30 06:47:28 crc kubenswrapper[4520]: I0130 06:47:28.535743 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 06:47:28 crc kubenswrapper[4520]: I0130 06:47:28.539053 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 06:47:28 crc kubenswrapper[4520]: W0130 06:47:28.562658 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5a3f3d44_2bd4_4157_ae3b_d8135e94502d.slice/crio-e7b430a54e2691bb3b6682b3c2f1d1752f7b72318a088ab916ac27bd44e21428 WatchSource:0}: Error finding container e7b430a54e2691bb3b6682b3c2f1d1752f7b72318a088ab916ac27bd44e21428: Status 404 returned error can't find the container with id e7b430a54e2691bb3b6682b3c2f1d1752f7b72318a088ab916ac27bd44e21428 Jan 30 06:47:29 crc kubenswrapper[4520]: I0130 06:47:29.258593 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5a3f3d44-2bd4-4157-ae3b-d8135e94502d","Type":"ContainerStarted","Data":"e7b430a54e2691bb3b6682b3c2f1d1752f7b72318a088ab916ac27bd44e21428"} Jan 30 06:47:29 crc kubenswrapper[4520]: I0130 06:47:29.533129 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 06:47:29 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld Jan 30 06:47:29 crc kubenswrapper[4520]: [+]process-running ok Jan 30 06:47:29 crc kubenswrapper[4520]: healthz check failed Jan 30 06:47:29 crc kubenswrapper[4520]: 
Jan 30 06:47:30 crc kubenswrapper[4520]: I0130 06:47:30.533228 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:30 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:30 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:30 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:30 crc kubenswrapper[4520]: I0130 06:47:30.533975 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:30 crc kubenswrapper[4520]: I0130 06:47:30.559984 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx"
Jan 30 06:47:30 crc kubenswrapper[4520]: I0130 06:47:30.580238 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e1a8ebe-5163-47dd-a320-a286c92971c2-metrics-certs\") pod \"network-metrics-daemon-z5rcx\" (UID: \"6e1a8ebe-5163-47dd-a320-a286c92971c2\") " pod="openshift-multus/network-metrics-daemon-z5rcx"
Jan 30 06:47:30 crc kubenswrapper[4520]: I0130 06:47:30.694574 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z5rcx"
Jan 30 06:47:31 crc kubenswrapper[4520]: I0130 06:47:31.533559 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:31 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:31 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:31 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:31 crc kubenswrapper[4520]: I0130 06:47:31.533825 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:32 crc kubenswrapper[4520]: I0130 06:47:32.532248 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:32 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:32 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:32 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:32 crc kubenswrapper[4520]: I0130 06:47:32.532334 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:32 crc kubenswrapper[4520]: I0130 06:47:32.664740 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-lflpb"
Jan 30 06:47:33 crc kubenswrapper[4520]: I0130 06:47:33.037123 4520 patch_prober.go:28] interesting pod/console-f9d7485db-nkbdc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 30 06:47:33 crc kubenswrapper[4520]: I0130 06:47:33.037177 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nkbdc" podUID="d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 30 06:47:33 crc kubenswrapper[4520]: I0130 06:47:33.532814 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 06:47:33 crc kubenswrapper[4520]: [-]has-synced failed: reason withheld
Jan 30 06:47:33 crc kubenswrapper[4520]: [+]process-running ok
Jan 30 06:47:33 crc kubenswrapper[4520]: healthz check failed
Jan 30 06:47:33 crc kubenswrapper[4520]: I0130 06:47:33.532880 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 06:47:34 crc kubenswrapper[4520]: I0130 06:47:34.532533 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-z67kf"
probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:34 crc kubenswrapper[4520]: I0130 06:47:34.536565 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 06:47:36 crc kubenswrapper[4520]: I0130 06:47:36.035465 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z5rcx"] Jan 30 06:47:36 crc kubenswrapper[4520]: I0130 06:47:36.335374 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5a3f3d44-2bd4-4157-ae3b-d8135e94502d","Type":"ContainerStarted","Data":"d182f37508451132ead43db232641bdb795dd8a174ba419d98daeab52e0bfc5d"} Jan 30 06:47:36 crc kubenswrapper[4520]: I0130 06:47:36.348922 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=9.348904553 podStartE2EDuration="9.348904553s" podCreationTimestamp="2026-01-30 06:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:36.34886551 +0000 UTC m=+169.977217692" watchObservedRunningTime="2026-01-30 06:47:36.348904553 +0000 UTC m=+169.977256725" Jan 30 06:47:37 crc kubenswrapper[4520]: I0130 06:47:37.343126 4520 generic.go:334] "Generic (PLEG): container finished" podID="5a3f3d44-2bd4-4157-ae3b-d8135e94502d" containerID="d182f37508451132ead43db232641bdb795dd8a174ba419d98daeab52e0bfc5d" exitCode=0 Jan 30 06:47:37 crc kubenswrapper[4520]: I0130 06:47:37.343217 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5a3f3d44-2bd4-4157-ae3b-d8135e94502d","Type":"ContainerDied","Data":"d182f37508451132ead43db232641bdb795dd8a174ba419d98daeab52e0bfc5d"} Jan 30 06:47:42 crc kubenswrapper[4520]: I0130 06:47:42.087778 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" Jan 30 06:47:43 crc kubenswrapper[4520]: I0130 06:47:43.042474 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:43 crc kubenswrapper[4520]: I0130 06:47:43.051118 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-nkbdc" Jan 30 06:47:46 crc kubenswrapper[4520]: I0130 06:47:46.308169 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-782cc"] Jan 30 06:47:48 crc kubenswrapper[4520]: I0130 06:47:48.827938 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 06:47:48 crc kubenswrapper[4520]: E0130 06:47:48.866600 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 06:47:48 crc kubenswrapper[4520]: E0130 06:47:48.866767 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvmx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-b5sch_openshift-marketplace(40fa3317-086a-4e6e-bc50-3d267cb056f9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 06:47:48 crc kubenswrapper[4520]: E0130 06:47:48.868151 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-b5sch" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" Jan 30 06:47:48 crc kubenswrapper[4520]: E0130 06:47:48.906668 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 06:47:48 crc kubenswrapper[4520]: E0130 06:47:48.906780 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 30 06:47:48 crc kubenswrapper[4520]: E0130 06:47:48.908641 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kcz8t" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9"
Jan 30 06:47:48 crc kubenswrapper[4520]: I0130 06:47:48.962771 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kube-api-access\") pod \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\" (UID: \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\") "
Jan 30 06:47:48 crc kubenswrapper[4520]: I0130 06:47:48.962839 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kubelet-dir\") pod \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\" (UID: \"5a3f3d44-2bd4-4157-ae3b-d8135e94502d\") "
Jan 30 06:47:48 crc kubenswrapper[4520]: I0130 06:47:48.963149 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a3f3d44-2bd4-4157-ae3b-d8135e94502d" (UID: "5a3f3d44-2bd4-4157-ae3b-d8135e94502d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:47:48 crc kubenswrapper[4520]: I0130 06:47:48.968663 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a3f3d44-2bd4-4157-ae3b-d8135e94502d" (UID: "5a3f3d44-2bd4-4157-ae3b-d8135e94502d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:47:49 crc kubenswrapper[4520]: I0130 06:47:49.065898 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 06:47:49 crc kubenswrapper[4520]: I0130 06:47:49.065951 4520 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3f3d44-2bd4-4157-ae3b-d8135e94502d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 06:47:49 crc kubenswrapper[4520]: I0130 06:47:49.418144 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 06:47:49 crc kubenswrapper[4520]: I0130 06:47:49.419413 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5a3f3d44-2bd4-4157-ae3b-d8135e94502d","Type":"ContainerDied","Data":"e7b430a54e2691bb3b6682b3c2f1d1752f7b72318a088ab916ac27bd44e21428"}
Jan 30 06:47:49 crc kubenswrapper[4520]: I0130 06:47:49.419492 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b430a54e2691bb3b6682b3c2f1d1752f7b72318a088ab916ac27bd44e21428"
Jan 30 06:47:49 crc kubenswrapper[4520]: I0130 06:47:49.421347 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" event={"ID":"6e1a8ebe-5163-47dd-a320-a286c92971c2","Type":"ContainerStarted","Data":"f1c487fa63d2c76b60c50f78a95fcd1788c71d90d5e580c282e610f8ea444696"}
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.177147 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-b5sch" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.177289 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kcz8t" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.230191 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.230292 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttxzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-j789c_openshift-marketplace(53876f72-b696-4749-9677-8aed346a928b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.231439 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-j789c" podUID="53876f72-b696-4749-9677-8aed346a928b"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.291428 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.291557 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-q6zxm_openshift-marketplace(f7e7a17d-563e-41ac-ba83-9a513203f5cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.292729 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-q6zxm" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.430998 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-q6zxm" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb"
Jan 30 06:47:50 crc kubenswrapper[4520]: E0130 06:47:50.431577 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-j789c" podUID="53876f72-b696-4749-9677-8aed346a928b"
Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.437696 4520 generic.go:334] "Generic (PLEG): container finished" podID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerID="8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db" exitCode=0
Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.437787 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzxr" event={"ID":"257ef61b-c019-4bea-8449-f5b2f9a27e47","Type":"ContainerDied","Data":"8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db"}
Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.440759 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" event={"ID":"6e1a8ebe-5163-47dd-a320-a286c92971c2","Type":"ContainerStarted","Data":"7fc61b952b3b856478a9477c1643dbd8160f24857cf6772d7dc4edfe850f61d1"}
event={"ID":"6e1a8ebe-5163-47dd-a320-a286c92971c2","Type":"ContainerStarted","Data":"7fc61b952b3b856478a9477c1643dbd8160f24857cf6772d7dc4edfe850f61d1"} Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.441832 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z5rcx" event={"ID":"6e1a8ebe-5163-47dd-a320-a286c92971c2","Type":"ContainerStarted","Data":"3b034609141268c7ade95c5c3bb51f5deac8ec2fa974f4a279f4b5554bddeeea"} Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.445692 4520 generic.go:334] "Generic (PLEG): container finished" podID="1186824d-c461-481a-aad1-1e0672b8bcab" containerID="1239fb4b1561a8c3361664d7a10b23cba47b66b09b7047547cfd7544088f96ab" exitCode=0 Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.445737 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgth8" event={"ID":"1186824d-c461-481a-aad1-1e0672b8bcab","Type":"ContainerDied","Data":"1239fb4b1561a8c3361664d7a10b23cba47b66b09b7047547cfd7544088f96ab"} Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.448986 4520 generic.go:334] "Generic (PLEG): container finished" podID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerID="8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367" exitCode=0 Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.449107 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkfw" event={"ID":"24fc2386-ea09-46c6-a097-f4c302b305b7","Type":"ContainerDied","Data":"8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367"} Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.453296 4520 generic.go:334] "Generic (PLEG): container finished" podID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerID="5c68cb236b2bb35179c551ce58b04009fc68a482cf7c683e8c2240b0a065d7d6" exitCode=0 Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.453347 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zm96m" event={"ID":"1d813745-1351-4573-a0ee-7fd8e3332c6e","Type":"ContainerDied","Data":"5c68cb236b2bb35179c551ce58b04009fc68a482cf7c683e8c2240b0a065d7d6"} Jan 30 06:47:51 crc kubenswrapper[4520]: I0130 06:47:51.521200 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-z5rcx" podStartSLOduration=163.521181242 podStartE2EDuration="2m43.521181242s" podCreationTimestamp="2026-01-30 06:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:51.516145402 +0000 UTC m=+185.144497584" watchObservedRunningTime="2026-01-30 06:47:51.521181242 +0000 UTC m=+185.149533423" Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.463213 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzxr" event={"ID":"257ef61b-c019-4bea-8449-f5b2f9a27e47","Type":"ContainerStarted","Data":"547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7"} Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.465868 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgth8" event={"ID":"1186824d-c461-481a-aad1-1e0672b8bcab","Type":"ContainerStarted","Data":"62cd30d80eca5132e7221c6a0340d7a489c5badd257cb5f96bc31ad6843830d9"} Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.467707 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-gqkfw" event={"ID":"24fc2386-ea09-46c6-a097-f4c302b305b7","Type":"ContainerStarted","Data":"333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031"} Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.470843 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zm96m" event={"ID":"1d813745-1351-4573-a0ee-7fd8e3332c6e","Type":"ContainerStarted","Data":"51dd7f6e286df9b531aa9d4e6b5e69734f73e74ce5ef50f4c46735fc502c9565"} Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.502194 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4kzxr" podStartSLOduration=2.704910879 podStartE2EDuration="29.502181512s" podCreationTimestamp="2026-01-30 06:47:23 +0000 UTC" firstStartedPulling="2026-01-30 06:47:25.160497104 +0000 UTC m=+158.788849284" lastFinishedPulling="2026-01-30 06:47:51.957767736 +0000 UTC m=+185.586119917" observedRunningTime="2026-01-30 06:47:52.501206727 +0000 UTC m=+186.129558908" watchObservedRunningTime="2026-01-30 06:47:52.502181512 +0000 UTC m=+186.130533693" Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.519007 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqkfw" podStartSLOduration=2.757546143 podStartE2EDuration="29.518988029s" podCreationTimestamp="2026-01-30 06:47:23 +0000 UTC" firstStartedPulling="2026-01-30 06:47:25.167580457 +0000 UTC m=+158.795932638" lastFinishedPulling="2026-01-30 06:47:51.929022343 +0000 UTC m=+185.557374524" observedRunningTime="2026-01-30 06:47:52.518624856 +0000 UTC m=+186.146977026" watchObservedRunningTime="2026-01-30 06:47:52.518988029 +0000 UTC m=+186.147340211" Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.542790 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zm96m" podStartSLOduration=2.482565591 podStartE2EDuration="32.542774237s" podCreationTimestamp="2026-01-30 06:47:20 +0000 UTC" firstStartedPulling="2026-01-30 06:47:21.969593664 +0000 UTC m=+155.597945844" lastFinishedPulling="2026-01-30 06:47:52.029802309 +0000 UTC m=+185.658154490" observedRunningTime="2026-01-30 06:47:52.540139066 +0000 UTC m=+186.168491246" watchObservedRunningTime="2026-01-30 06:47:52.542774237 +0000 UTC m=+186.171126418" Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.557628 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hgth8" podStartSLOduration=2.6047888219999997 podStartE2EDuration="32.557614092s" podCreationTimestamp="2026-01-30 06:47:20 +0000 UTC" firstStartedPulling="2026-01-30 06:47:21.967048513 +0000 UTC m=+155.595400693" lastFinishedPulling="2026-01-30 06:47:51.919873781 +0000 UTC m=+185.548225963" observedRunningTime="2026-01-30 06:47:52.554780748 +0000 UTC m=+186.183132929" watchObservedRunningTime="2026-01-30 06:47:52.557614092 +0000 UTC m=+186.185966273" Jan 30 06:47:52 crc kubenswrapper[4520]: I0130 06:47:52.903708 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 06:47:53 crc kubenswrapper[4520]: I0130 06:47:53.508553 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" Jan 30 06:47:53 crc kubenswrapper[4520]: I0130 06:47:53.608125 4520 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:53 crc kubenswrapper[4520]: I0130 06:47:53.608194 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:47:53 crc kubenswrapper[4520]: I0130 06:47:53.825093 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:53 crc kubenswrapper[4520]: I0130 06:47:53.825153 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.275343 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jk9c"] Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.275541 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" podUID="dd235a24-175b-4983-980e-2630b3c5b39f" containerName="controller-manager" containerID="cri-o://509ae1a371e95feb26565995e46d7370183a7f57dd1c8b897ed0be107fc0f00a" gracePeriod=30 Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.343809 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"] Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.343996 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" podUID="d63d73a7-c813-4983-bccf-805604f7d593" containerName="route-controller-manager" containerID="cri-o://8a6ab591496ffcc19fe10012aeb39bf277ecd6461fbc25e5a0f2ed8e5dfa055d" gracePeriod=30 Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.484898 4520 generic.go:334] "Generic (PLEG): container finished" podID="dd235a24-175b-4983-980e-2630b3c5b39f" containerID="509ae1a371e95feb26565995e46d7370183a7f57dd1c8b897ed0be107fc0f00a" exitCode=0 Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.485349 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" event={"ID":"dd235a24-175b-4983-980e-2630b3c5b39f","Type":"ContainerDied","Data":"509ae1a371e95feb26565995e46d7370183a7f57dd1c8b897ed0be107fc0f00a"} Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.488079 4520 generic.go:334] "Generic (PLEG): container finished" podID="d63d73a7-c813-4983-bccf-805604f7d593" containerID="8a6ab591496ffcc19fe10012aeb39bf277ecd6461fbc25e5a0f2ed8e5dfa055d" exitCode=0 Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.488832 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" event={"ID":"d63d73a7-c813-4983-bccf-805604f7d593","Type":"ContainerDied","Data":"8a6ab591496ffcc19fe10012aeb39bf277ecd6461fbc25e5a0f2ed8e5dfa055d"} Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.679164 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4kzxr" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="registry-server" probeResult="failure" output=< Jan 30 06:47:54 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 06:47:54 crc kubenswrapper[4520]: > Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.845444 4520 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.877262 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqkfw" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="registry-server" probeResult="failure" output=< Jan 30 06:47:54 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 06:47:54 crc kubenswrapper[4520]: > Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.941188 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.956770 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6nc4\" (UniqueName: \"kubernetes.io/projected/d63d73a7-c813-4983-bccf-805604f7d593-kube-api-access-d6nc4\") pod \"d63d73a7-c813-4983-bccf-805604f7d593\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.956836 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63d73a7-c813-4983-bccf-805604f7d593-serving-cert\") pod \"d63d73a7-c813-4983-bccf-805604f7d593\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.956987 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-config\") pod \"d63d73a7-c813-4983-bccf-805604f7d593\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.957060 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-client-ca\") pod \"d63d73a7-c813-4983-bccf-805604f7d593\" (UID: \"d63d73a7-c813-4983-bccf-805604f7d593\") " Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.957977 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-client-ca" (OuterVolumeSpecName: "client-ca") pod "d63d73a7-c813-4983-bccf-805604f7d593" (UID: "d63d73a7-c813-4983-bccf-805604f7d593"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.958715 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-config" (OuterVolumeSpecName: "config") pod "d63d73a7-c813-4983-bccf-805604f7d593" (UID: "d63d73a7-c813-4983-bccf-805604f7d593"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.970367 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63d73a7-c813-4983-bccf-805604f7d593-kube-api-access-d6nc4" (OuterVolumeSpecName: "kube-api-access-d6nc4") pod "d63d73a7-c813-4983-bccf-805604f7d593" (UID: "d63d73a7-c813-4983-bccf-805604f7d593"). InnerVolumeSpecName "kube-api-access-d6nc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:47:54 crc kubenswrapper[4520]: I0130 06:47:54.970386 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d63d73a7-c813-4983-bccf-805604f7d593-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d63d73a7-c813-4983-bccf-805604f7d593" (UID: "d63d73a7-c813-4983-bccf-805604f7d593"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.057829 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-proxy-ca-bundles\") pod \"dd235a24-175b-4983-980e-2630b3c5b39f\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.057869 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd235a24-175b-4983-980e-2630b3c5b39f-serving-cert\") pod \"dd235a24-175b-4983-980e-2630b3c5b39f\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.057911 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-config\") pod \"dd235a24-175b-4983-980e-2630b3c5b39f\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.057937 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-client-ca\") pod \"dd235a24-175b-4983-980e-2630b3c5b39f\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.057982 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs4tc\" (UniqueName: \"kubernetes.io/projected/dd235a24-175b-4983-980e-2630b3c5b39f-kube-api-access-cs4tc\") pod \"dd235a24-175b-4983-980e-2630b3c5b39f\" (UID: \"dd235a24-175b-4983-980e-2630b3c5b39f\") " Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.058179 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.058191 4520 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63d73a7-c813-4983-bccf-805604f7d593-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.058200 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6nc4\" (UniqueName: \"kubernetes.io/projected/d63d73a7-c813-4983-bccf-805604f7d593-kube-api-access-d6nc4\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.058212 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63d73a7-c813-4983-bccf-805604f7d593-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.058730 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-proxy-ca-bundles" (OuterVolumeSpecName: 
"proxy-ca-bundles") pod "dd235a24-175b-4983-980e-2630b3c5b39f" (UID: "dd235a24-175b-4983-980e-2630b3c5b39f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.059059 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-config" (OuterVolumeSpecName: "config") pod "dd235a24-175b-4983-980e-2630b3c5b39f" (UID: "dd235a24-175b-4983-980e-2630b3c5b39f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.059089 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd235a24-175b-4983-980e-2630b3c5b39f" (UID: "dd235a24-175b-4983-980e-2630b3c5b39f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.063130 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd235a24-175b-4983-980e-2630b3c5b39f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd235a24-175b-4983-980e-2630b3c5b39f" (UID: "dd235a24-175b-4983-980e-2630b3c5b39f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.063290 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd235a24-175b-4983-980e-2630b3c5b39f-kube-api-access-cs4tc" (OuterVolumeSpecName: "kube-api-access-cs4tc") pod "dd235a24-175b-4983-980e-2630b3c5b39f" (UID: "dd235a24-175b-4983-980e-2630b3c5b39f"). InnerVolumeSpecName "kube-api-access-cs4tc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.158779 4520 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.158809 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd235a24-175b-4983-980e-2630b3c5b39f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.158821 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.158829 4520 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd235a24-175b-4983-980e-2630b3c5b39f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.158837 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs4tc\" (UniqueName: \"kubernetes.io/projected/dd235a24-175b-4983-980e-2630b3c5b39f-kube-api-access-cs4tc\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.432417 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96"] Jan 30 06:47:55 crc kubenswrapper[4520]: E0130 06:47:55.433478 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a3f3d44-2bd4-4157-ae3b-d8135e94502d" containerName="pruner" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.433505 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3f3d44-2bd4-4157-ae3b-d8135e94502d" containerName="pruner" Jan 30 06:47:55 crc kubenswrapper[4520]: E0130 06:47:55.433579 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd235a24-175b-4983-980e-2630b3c5b39f" containerName="controller-manager" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.433596 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd235a24-175b-4983-980e-2630b3c5b39f" containerName="controller-manager" Jan 30 06:47:55 crc kubenswrapper[4520]: E0130 06:47:55.433611 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63d73a7-c813-4983-bccf-805604f7d593" containerName="route-controller-manager" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.433621 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63d73a7-c813-4983-bccf-805604f7d593" containerName="route-controller-manager" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.433811 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63d73a7-c813-4983-bccf-805604f7d593" containerName="route-controller-manager" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.433830 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a3f3d44-2bd4-4157-ae3b-d8135e94502d" containerName="pruner" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.433848 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd235a24-175b-4983-980e-2630b3c5b39f" containerName="controller-manager" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.434678 4520 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz"] Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.434841 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.435423 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.445874 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96"] Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.447739 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz"] Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.498080 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" event={"ID":"dd235a24-175b-4983-980e-2630b3c5b39f","Type":"ContainerDied","Data":"03df071c64f6cdcf64ba51826a5cff0a863da13e6ccc617943adefc81c874ad5"} Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.498113 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jk9c" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.498167 4520 scope.go:117] "RemoveContainer" containerID="509ae1a371e95feb26565995e46d7370183a7f57dd1c8b897ed0be107fc0f00a" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.501104 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" event={"ID":"d63d73a7-c813-4983-bccf-805604f7d593","Type":"ContainerDied","Data":"21552e408d7c3ebafd95db380175ddb5ed7f87a0b09e79d8a6dddee1e8745898"} Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.501272 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.528983 4520 scope.go:117] "RemoveContainer" containerID="8a6ab591496ffcc19fe10012aeb39bf277ecd6461fbc25e5a0f2ed8e5dfa055d" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.547245 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jk9c"] Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.561639 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jk9c"] Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.562396 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deef6f4c-cf38-4133-9417-0bc7a3c999da-serving-cert\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.562490 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-client-ca\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.562595 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfvgh\" (UniqueName: \"kubernetes.io/projected/deef6f4c-cf38-4133-9417-0bc7a3c999da-kube-api-access-pfvgh\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.562667 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-proxy-ca-bundles\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.562799 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/167050d8-bab9-46b3-8fd1-8c5355c15653-serving-cert\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.562880 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-client-ca\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.562937 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-config\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.563012 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx6r6\" (UniqueName: \"kubernetes.io/projected/167050d8-bab9-46b3-8fd1-8c5355c15653-kube-api-access-kx6r6\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.563089 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-config\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.565201 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"] Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.567387 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqjqj"] Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664657 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-config\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664703 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx6r6\" (UniqueName: \"kubernetes.io/projected/167050d8-bab9-46b3-8fd1-8c5355c15653-kube-api-access-kx6r6\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664736 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-config\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664803 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deef6f4c-cf38-4133-9417-0bc7a3c999da-serving-cert\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664827 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-client-ca\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: 
\"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664861 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfvgh\" (UniqueName: \"kubernetes.io/projected/deef6f4c-cf38-4133-9417-0bc7a3c999da-kube-api-access-pfvgh\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664888 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-proxy-ca-bundles\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664912 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/167050d8-bab9-46b3-8fd1-8c5355c15653-serving-cert\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.664936 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-client-ca\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.666491 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-config\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.666979 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-client-ca\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.667884 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-client-ca\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.668580 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-proxy-ca-bundles\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.671717 4520 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deef6f4c-cf38-4133-9417-0bc7a3c999da-serving-cert\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.675475 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/167050d8-bab9-46b3-8fd1-8c5355c15653-serving-cert\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.675947 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-config\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.684418 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx6r6\" (UniqueName: \"kubernetes.io/projected/167050d8-bab9-46b3-8fd1-8c5355c15653-kube-api-access-kx6r6\") pod \"controller-manager-6dbcfd8d67-xphrz\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.686031 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfvgh\" (UniqueName: \"kubernetes.io/projected/deef6f4c-cf38-4133-9417-0bc7a3c999da-kube-api-access-pfvgh\") pod \"route-controller-manager-658bbd69-jpq96\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") " pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.753009 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.759697 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:55 crc kubenswrapper[4520]: I0130 06:47:55.979019 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz"] Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.000720 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96"] Jan 30 06:47:56 crc kubenswrapper[4520]: W0130 06:47:56.007695 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddeef6f4c_cf38_4133_9417_0bc7a3c999da.slice/crio-7c36cf86205d710aaeb8af0177a68ffe9518abbbd82f297436d1279e85374ddd WatchSource:0}: Error finding container 7c36cf86205d710aaeb8af0177a68ffe9518abbbd82f297436d1279e85374ddd: Status 404 returned error can't find the container with id 7c36cf86205d710aaeb8af0177a68ffe9518abbbd82f297436d1279e85374ddd Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.506230 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" event={"ID":"deef6f4c-cf38-4133-9417-0bc7a3c999da","Type":"ContainerStarted","Data":"3c4ae39e877a9694e81d450d2874aecbb01587370ff5d42d386f18df44e585e4"} Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.506548 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" event={"ID":"deef6f4c-cf38-4133-9417-0bc7a3c999da","Type":"ContainerStarted","Data":"7c36cf86205d710aaeb8af0177a68ffe9518abbbd82f297436d1279e85374ddd"} Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.507535 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.510911 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" event={"ID":"167050d8-bab9-46b3-8fd1-8c5355c15653","Type":"ContainerStarted","Data":"7980ccee48cfbc86312d9652acc89d22485fe7f10b4cd446b562af0b2f7a83e8"} Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.510935 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" event={"ID":"167050d8-bab9-46b3-8fd1-8c5355c15653","Type":"ContainerStarted","Data":"73133593f0884a7747c3cd7c540ef475733749f23b91d4596f7d5a86633fae5b"} Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.511117 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.514873 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.549713 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" podStartSLOduration=2.549702815 podStartE2EDuration="2.549702815s" podCreationTimestamp="2026-01-30 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:56.545699147 +0000 UTC m=+190.174051328" 
watchObservedRunningTime="2026-01-30 06:47:56.549702815 +0000 UTC m=+190.178054985" Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.556743 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.582767 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" podStartSLOduration=2.582751568 podStartE2EDuration="2.582751568s" podCreationTimestamp="2026-01-30 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:56.580124551 +0000 UTC m=+190.208476733" watchObservedRunningTime="2026-01-30 06:47:56.582751568 +0000 UTC m=+190.211103749" Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.692373 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d63d73a7-c813-4983-bccf-805604f7d593" path="/var/lib/kubelet/pods/d63d73a7-c813-4983-bccf-805604f7d593/volumes" Jan 30 06:47:56 crc kubenswrapper[4520]: I0130 06:47:56.692956 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd235a24-175b-4983-980e-2630b3c5b39f" path="/var/lib/kubelet/pods/dd235a24-175b-4983-980e-2630b3c5b39f/volumes" Jan 30 06:47:57 crc kubenswrapper[4520]: I0130 06:47:57.792983 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:47:57 crc kubenswrapper[4520]: I0130 06:47:57.793505 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:48:00 crc kubenswrapper[4520]: I0130 06:48:00.433259 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:48:00 crc kubenswrapper[4520]: I0130 06:48:00.433718 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:48:00 crc kubenswrapper[4520]: I0130 06:48:00.478733 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:48:00 crc kubenswrapper[4520]: I0130 06:48:00.571052 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:48:00 crc kubenswrapper[4520]: I0130 06:48:00.633990 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zm96m" Jan 30 06:48:00 crc kubenswrapper[4520]: I0130 06:48:00.634086 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zm96m" Jan 30 06:48:00 crc kubenswrapper[4520]: I0130 06:48:00.673800 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zm96m" Jan 30 06:48:01 crc kubenswrapper[4520]: I0130 06:48:01.571748 4520 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zm96m" Jan 30 06:48:02 crc kubenswrapper[4520]: I0130 06:48:02.547337 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcz8t" event={"ID":"6ebd5875-2b47-4f0d-b8ad-15709cff81b9","Type":"ContainerStarted","Data":"1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367"} Jan 30 06:48:03 crc kubenswrapper[4520]: I0130 06:48:03.555849 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5sch" event={"ID":"40fa3317-086a-4e6e-bc50-3d267cb056f9","Type":"ContainerStarted","Data":"4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0"} Jan 30 06:48:03 crc kubenswrapper[4520]: I0130 06:48:03.560161 4520 generic.go:334] "Generic (PLEG): container finished" podID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerID="1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367" exitCode=0 Jan 30 06:48:03 crc kubenswrapper[4520]: I0130 06:48:03.560218 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcz8t" event={"ID":"6ebd5875-2b47-4f0d-b8ad-15709cff81b9","Type":"ContainerDied","Data":"1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367"} Jan 30 06:48:03 crc kubenswrapper[4520]: I0130 06:48:03.564300 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6zxm" event={"ID":"f7e7a17d-563e-41ac-ba83-9a513203f5cb","Type":"ContainerStarted","Data":"13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f"} Jan 30 06:48:03 crc kubenswrapper[4520]: I0130 06:48:03.640340 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:48:03 crc kubenswrapper[4520]: I0130 06:48:03.673792 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:48:03 crc kubenswrapper[4520]: I0130 06:48:03.857143 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:48:03 crc kubenswrapper[4520]: I0130 06:48:03.891649 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqkfw" Jan 30 06:48:04 crc kubenswrapper[4520]: I0130 06:48:04.571755 4520 generic.go:334] "Generic (PLEG): container finished" podID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerID="13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f" exitCode=0 Jan 30 06:48:04 crc kubenswrapper[4520]: I0130 06:48:04.571840 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6zxm" event={"ID":"f7e7a17d-563e-41ac-ba83-9a513203f5cb","Type":"ContainerDied","Data":"13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f"} Jan 30 06:48:04 crc kubenswrapper[4520]: I0130 06:48:04.571893 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6zxm" event={"ID":"f7e7a17d-563e-41ac-ba83-9a513203f5cb","Type":"ContainerStarted","Data":"635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03"} Jan 30 06:48:04 crc kubenswrapper[4520]: I0130 06:48:04.574036 4520 generic.go:334] "Generic (PLEG): container finished" podID="40fa3317-086a-4e6e-bc50-3d267cb056f9" 
containerID="4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0" exitCode=0 Jan 30 06:48:04 crc kubenswrapper[4520]: I0130 06:48:04.574137 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5sch" event={"ID":"40fa3317-086a-4e6e-bc50-3d267cb056f9","Type":"ContainerDied","Data":"4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0"} Jan 30 06:48:04 crc kubenswrapper[4520]: I0130 06:48:04.576623 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcz8t" event={"ID":"6ebd5875-2b47-4f0d-b8ad-15709cff81b9","Type":"ContainerStarted","Data":"3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604"} Jan 30 06:48:04 crc kubenswrapper[4520]: I0130 06:48:04.592293 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q6zxm" podStartSLOduration=2.601290479 podStartE2EDuration="42.592277258s" podCreationTimestamp="2026-01-30 06:47:22 +0000 UTC" firstStartedPulling="2026-01-30 06:47:24.135018414 +0000 UTC m=+157.763370595" lastFinishedPulling="2026-01-30 06:48:04.126005193 +0000 UTC m=+197.754357374" observedRunningTime="2026-01-30 06:48:04.59100289 +0000 UTC m=+198.219355070" watchObservedRunningTime="2026-01-30 06:48:04.592277258 +0000 UTC m=+198.220629439" Jan 30 06:48:04 crc kubenswrapper[4520]: I0130 06:48:04.626358 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kcz8t" podStartSLOduration=2.416499854 podStartE2EDuration="44.626343998s" podCreationTimestamp="2026-01-30 06:47:20 +0000 UTC" firstStartedPulling="2026-01-30 06:47:21.964920066 +0000 UTC m=+155.593272247" lastFinishedPulling="2026-01-30 06:48:04.174764211 +0000 UTC m=+197.803116391" observedRunningTime="2026-01-30 06:48:04.625668526 +0000 UTC m=+198.254020708" watchObservedRunningTime="2026-01-30 06:48:04.626343998 +0000 UTC m=+198.254696179" Jan 30 06:48:05 crc kubenswrapper[4520]: I0130 06:48:05.587530 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5sch" event={"ID":"40fa3317-086a-4e6e-bc50-3d267cb056f9","Type":"ContainerStarted","Data":"5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6"} Jan 30 06:48:05 crc kubenswrapper[4520]: I0130 06:48:05.618647 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b5sch" podStartSLOduration=2.408706812 podStartE2EDuration="45.618626815s" podCreationTimestamp="2026-01-30 06:47:20 +0000 UTC" firstStartedPulling="2026-01-30 06:47:21.961470101 +0000 UTC m=+155.589822283" lastFinishedPulling="2026-01-30 06:48:05.171390105 +0000 UTC m=+198.799742286" observedRunningTime="2026-01-30 06:48:05.616051687 +0000 UTC m=+199.244403868" watchObservedRunningTime="2026-01-30 06:48:05.618626815 +0000 UTC m=+199.246978996" Jan 30 06:48:06 crc kubenswrapper[4520]: I0130 06:48:06.595861 4520 generic.go:334] "Generic (PLEG): container finished" podID="53876f72-b696-4749-9677-8aed346a928b" containerID="036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce" exitCode=0 Jan 30 06:48:06 crc kubenswrapper[4520]: I0130 06:48:06.595922 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j789c" event={"ID":"53876f72-b696-4749-9677-8aed346a928b","Type":"ContainerDied","Data":"036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce"} Jan 30 06:48:06 crc 
kubenswrapper[4520]: I0130 06:48:06.997210 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 06:48:06 crc kubenswrapper[4520]: I0130 06:48:06.997996 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:06.999995 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.000134 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.005329 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.107793 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqkfw"]
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.108120 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqkfw" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="registry-server" containerID="cri-o://333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031" gracePeriod=2
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.121172 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.121352 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.222218 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.222311 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.222400 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.240738 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.311928 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.484576 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqkfw"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.606255 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j789c" event={"ID":"53876f72-b696-4749-9677-8aed346a928b","Type":"ContainerStarted","Data":"0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24"}
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.616272 4520 generic.go:334] "Generic (PLEG): container finished" podID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerID="333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031" exitCode=0
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.616335 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkfw" event={"ID":"24fc2386-ea09-46c6-a097-f4c302b305b7","Type":"ContainerDied","Data":"333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031"}
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.616626 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqkfw" event={"ID":"24fc2386-ea09-46c6-a097-f4c302b305b7","Type":"ContainerDied","Data":"bc5b16b6dfff5d457b785abb865334247ae2e79949ccbffb5c49ab738aa20b25"}
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.616651 4520 scope.go:117] "RemoveContainer" containerID="333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.616775 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqkfw"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.628874 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n88nl\" (UniqueName: \"kubernetes.io/projected/24fc2386-ea09-46c6-a097-f4c302b305b7-kube-api-access-n88nl\") pod \"24fc2386-ea09-46c6-a097-f4c302b305b7\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") "
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.628924 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-catalog-content\") pod \"24fc2386-ea09-46c6-a097-f4c302b305b7\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") "
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.628978 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-utilities\") pod \"24fc2386-ea09-46c6-a097-f4c302b305b7\" (UID: \"24fc2386-ea09-46c6-a097-f4c302b305b7\") "
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.631654 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j789c" podStartSLOduration=2.566635058 podStartE2EDuration="45.631639827s" podCreationTimestamp="2026-01-30 06:47:22 +0000 UTC" firstStartedPulling="2026-01-30 06:47:24.125577802 +0000 UTC m=+157.753929983" lastFinishedPulling="2026-01-30 06:48:07.190582572 +0000 UTC m=+200.818934752" observedRunningTime="2026-01-30 06:48:07.629822586 +0000 UTC m=+201.258174767" watchObservedRunningTime="2026-01-30 06:48:07.631639827 +0000 UTC m=+201.259992008"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.633325 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-utilities" (OuterVolumeSpecName: "utilities") pod "24fc2386-ea09-46c6-a097-f4c302b305b7" (UID: "24fc2386-ea09-46c6-a097-f4c302b305b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.634660 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fc2386-ea09-46c6-a097-f4c302b305b7-kube-api-access-n88nl" (OuterVolumeSpecName: "kube-api-access-n88nl") pod "24fc2386-ea09-46c6-a097-f4c302b305b7" (UID: "24fc2386-ea09-46c6-a097-f4c302b305b7"). InnerVolumeSpecName "kube-api-access-n88nl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.644798 4520 scope.go:117] "RemoveContainer" containerID="8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.661888 4520 scope.go:117] "RemoveContainer" containerID="96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.679256 4520 scope.go:117] "RemoveContainer" containerID="333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031"
Jan 30 06:48:07 crc kubenswrapper[4520]: E0130 06:48:07.679740 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031\": container with ID starting with 333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031 not found: ID does not exist" containerID="333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.679873 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031"} err="failed to get container status \"333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031\": rpc error: code = NotFound desc = could not find container \"333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031\": container with ID starting with 333f74796ad225c6dca616ac83028cabdcfffeec5b9c156a7d0699b4a25bf031 not found: ID does not exist"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.679988 4520 scope.go:117] "RemoveContainer" containerID="8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367"
Jan 30 06:48:07 crc kubenswrapper[4520]: E0130 06:48:07.680401 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367\": container with ID starting with 8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367 not found: ID does not exist" containerID="8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.680495 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367"} err="failed to get container status \"8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367\": rpc error: code = NotFound desc = could not find container \"8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367\": container with ID starting with 8be741af717b6135a96eac4c4ebfac050feb9775845c0297264168b51f500367 not found: ID does not exist"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.680592 4520 scope.go:117] "RemoveContainer" containerID="96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa"
Jan 30 06:48:07 crc kubenswrapper[4520]: E0130 06:48:07.680918 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa\": container with ID starting with 96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa not found: ID does not exist" containerID="96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.681007 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa"} err="failed to get container status \"96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa\": rpc error: code = NotFound desc = could not find container \"96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa\": container with ID starting with 96c4405e18a89e8276a4045748385fcc27298f51e5e284c49e133e76d02425aa not found: ID does not exist"
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.724661 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 06:48:07 crc kubenswrapper[4520]: W0130 06:48:07.728954 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode2c6cdd5_ed01_4fee_b3cf_cdb3b8e78653.slice/crio-f9a9f8856a4bc2e1a561467aa8f8d49351f69935d195fb0f022852e871c322e3 WatchSource:0}: Error finding container f9a9f8856a4bc2e1a561467aa8f8d49351f69935d195fb0f022852e871c322e3: Status 404 returned error can't find the container with id f9a9f8856a4bc2e1a561467aa8f8d49351f69935d195fb0f022852e871c322e3
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.731731 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n88nl\" (UniqueName: \"kubernetes.io/projected/24fc2386-ea09-46c6-a097-f4c302b305b7-kube-api-access-n88nl\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.731881 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.734123 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24fc2386-ea09-46c6-a097-f4c302b305b7" (UID: "24fc2386-ea09-46c6-a097-f4c302b305b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.833929 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24fc2386-ea09-46c6-a097-f4c302b305b7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.944593 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqkfw"]
Jan 30 06:48:07 crc kubenswrapper[4520]: I0130 06:48:07.947301 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqkfw"]
Jan 30 06:48:08 crc kubenswrapper[4520]: E0130 06:48:08.018063 4520 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24fc2386_ea09_46c6_a097_f4c302b305b7.slice/crio-bc5b16b6dfff5d457b785abb865334247ae2e79949ccbffb5c49ab738aa20b25\": RecentStats: unable to find data in memory cache]"
Jan 30 06:48:08 crc kubenswrapper[4520]: I0130 06:48:08.625222 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653","Type":"ContainerStarted","Data":"45c14097e85a1344695dd48a8cba9acc3e4a5982532c2bbdf056c5b73148fcfe"}
Jan 30 06:48:08 crc kubenswrapper[4520]: I0130 06:48:08.625286 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653","Type":"ContainerStarted","Data":"f9a9f8856a4bc2e1a561467aa8f8d49351f69935d195fb0f022852e871c322e3"}
Jan 30 06:48:08 crc kubenswrapper[4520]: I0130 06:48:08.644103 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.644083048 podStartE2EDuration="2.644083048s" podCreationTimestamp="2026-01-30 06:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:48:08.642315941 +0000 UTC m=+202.270668122" watchObservedRunningTime="2026-01-30 06:48:08.644083048 +0000 UTC m=+202.272435229"
Jan 30 06:48:08 crc kubenswrapper[4520]: I0130 06:48:08.693149 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" path="/var/lib/kubelet/pods/24fc2386-ea09-46c6-a097-f4c302b305b7/volumes"
Jan 30 06:48:09 crc kubenswrapper[4520]: I0130 06:48:09.632150 4520 generic.go:334] "Generic (PLEG): container finished" podID="e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653" containerID="45c14097e85a1344695dd48a8cba9acc3e4a5982532c2bbdf056c5b73148fcfe" exitCode=0
Jan 30 06:48:09 crc kubenswrapper[4520]: I0130 06:48:09.632198 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653","Type":"ContainerDied","Data":"45c14097e85a1344695dd48a8cba9acc3e4a5982532c2bbdf056c5b73148fcfe"}
Jan 30 06:48:10 crc kubenswrapper[4520]: I0130 06:48:10.816009 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:48:10 crc kubenswrapper[4520]: I0130 06:48:10.816074 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:48:10 crc kubenswrapper[4520]: I0130 06:48:10.852599 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:48:10 crc kubenswrapper[4520]: I0130 06:48:10.921586 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.066443 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.066754 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.081595 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kubelet-dir\") pod \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\" (UID: \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.081699 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653" (UID: "e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.081710 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kube-api-access\") pod \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\" (UID: \"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.082282 4520 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.089274 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653" (UID: "e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.099938 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.184095 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.367577 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" podUID="265d9231-d5db-4cdb-80b8-dfd95dffa386" containerName="oauth-openshift" containerID="cri-o://7177b2e882009109fb97b6be4a37c50289504718c10d9d0722d9ebc363b675ce" gracePeriod=15
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.650366 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653","Type":"ContainerDied","Data":"f9a9f8856a4bc2e1a561467aa8f8d49351f69935d195fb0f022852e871c322e3"}
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.650403 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.650409 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a9f8856a4bc2e1a561467aa8f8d49351f69935d195fb0f022852e871c322e3"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.652559 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" event={"ID":"265d9231-d5db-4cdb-80b8-dfd95dffa386","Type":"ContainerDied","Data":"7177b2e882009109fb97b6be4a37c50289504718c10d9d0722d9ebc363b675ce"}
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.652499 4520 generic.go:334] "Generic (PLEG): container finished" podID="265d9231-d5db-4cdb-80b8-dfd95dffa386" containerID="7177b2e882009109fb97b6be4a37c50289504718c10d9d0722d9ebc363b675ce" exitCode=0
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.690053 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.692375 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.743154 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.792695 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-trusted-ca-bundle\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.792756 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-provider-selection\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.792792 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-router-certs\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793365 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-ocp-branding-template\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793392 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-policies\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793409 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgvmp\" (UniqueName: \"kubernetes.io/projected/265d9231-d5db-4cdb-80b8-dfd95dffa386-kube-api-access-bgvmp\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793427 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-error\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793440 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-idp-0-file-data\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793457 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-serving-cert\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793471 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-dir\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793485 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-login\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793503 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-cliconfig\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793537 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-service-ca\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793557 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-session\") pod \"265d9231-d5db-4cdb-80b8-dfd95dffa386\" (UID: \"265d9231-d5db-4cdb-80b8-dfd95dffa386\") "
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793686 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793748 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.793983 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.795243 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.796565 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265d9231-d5db-4cdb-80b8-dfd95dffa386-kube-api-access-bgvmp" (OuterVolumeSpecName: "kube-api-access-bgvmp") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "kube-api-access-bgvmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.796748 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.796746 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.797087 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.797221 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.797471 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.797586 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.798435 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.798733 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.799252 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "265d9231-d5db-4cdb-80b8-dfd95dffa386" (UID: "265d9231-d5db-4cdb-80b8-dfd95dffa386"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.895021 4520 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.895877 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgvmp\" (UniqueName: \"kubernetes.io/projected/265d9231-d5db-4cdb-80b8-dfd95dffa386-kube-api-access-bgvmp\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.895922 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.895935 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.896595 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.896672 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.896742 4520 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/265d9231-d5db-4cdb-80b8-dfd95dffa386-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.896810 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.896866 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.896927 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.896994 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.897050 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.897110 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:11 crc kubenswrapper[4520]: I0130 06:48:11.897176 4520 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/265d9231-d5db-4cdb-80b8-dfd95dffa386-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.508681 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kcz8t"]
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.670158 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-782cc"
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.670612 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-782cc" event={"ID":"265d9231-d5db-4cdb-80b8-dfd95dffa386","Type":"ContainerDied","Data":"4e9a2bb94e50ca225544494bb59d454e40935ef2c74911ce39f15e05276e4fcc"}
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.670662 4520 scope.go:117] "RemoveContainer" containerID="7177b2e882009109fb97b6be4a37c50289504718c10d9d0722d9ebc363b675ce"
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.697584 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-782cc"]
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.697622 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-782cc"]
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.823337 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.823428 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:48:12 crc kubenswrapper[4520]: I0130 06:48:12.855914 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:48:13 crc kubenswrapper[4520]: I0130 06:48:13.261751 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j789c"
Jan 30 06:48:13 crc kubenswrapper[4520]: I0130 06:48:13.262096 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j789c"
Jan 30 06:48:13 crc kubenswrapper[4520]: I0130 06:48:13.295471 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j789c"
Jan 30 06:48:13 crc kubenswrapper[4520]: I0130 06:48:13.507678 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5sch"]
Jan 30 06:48:13 crc kubenswrapper[4520]: I0130 06:48:13.683047 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b5sch" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerName="registry-server" containerID="cri-o://5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6" gracePeriod=2
Jan 30 06:48:13 crc kubenswrapper[4520]: I0130 06:48:13.683263 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kcz8t" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerName="registry-server" containerID="cri-o://3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604" gracePeriod=2
Jan 30 06:48:13 crc kubenswrapper[4520]: I0130 06:48:13.719280 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j789c"
Jan 30 06:48:13 crc kubenswrapper[4520]: I0130 06:48:13.727168 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q6zxm"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.196935 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.197571 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="extract-content"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.197603 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="extract-content"
Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.197629 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="registry-server"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.197641 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="registry-server"
Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.197662 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="extract-utilities"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.197673 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="extract-utilities"
Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.197688 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265d9231-d5db-4cdb-80b8-dfd95dffa386" containerName="oauth-openshift"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.197698 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="265d9231-d5db-4cdb-80b8-dfd95dffa386" containerName="oauth-openshift"
Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.197714 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653" containerName="pruner"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.197724 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653" containerName="pruner"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.197892 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fc2386-ea09-46c6-a097-f4c302b305b7" containerName="registry-server"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.197920 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c6cdd5-ed01-4fee-b3cf-cdb3b8e78653" containerName="pruner"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.197927 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="265d9231-d5db-4cdb-80b8-dfd95dffa386" containerName="oauth-openshift"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.198313 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.199609 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.201992 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.205607 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.212413 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.227649 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-utilities\") pod \"40fa3317-086a-4e6e-bc50-3d267cb056f9\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.227784 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-catalog-content\") pod \"40fa3317-086a-4e6e-bc50-3d267cb056f9\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.227884 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvmx6\" (UniqueName: \"kubernetes.io/projected/40fa3317-086a-4e6e-bc50-3d267cb056f9-kube-api-access-fvmx6\") pod \"40fa3317-086a-4e6e-bc50-3d267cb056f9\" (UID: \"40fa3317-086a-4e6e-bc50-3d267cb056f9\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.228191 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-utilities" (OuterVolumeSpecName: "utilities") pod "40fa3317-086a-4e6e-bc50-3d267cb056f9" (UID: "40fa3317-086a-4e6e-bc50-3d267cb056f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.228208 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kubelet-dir\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.228305 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kube-api-access\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.228372 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-var-lock\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.228588 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.233789 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40fa3317-086a-4e6e-bc50-3d267cb056f9-kube-api-access-fvmx6" (OuterVolumeSpecName: "kube-api-access-fvmx6") pod "40fa3317-086a-4e6e-bc50-3d267cb056f9" (UID: "40fa3317-086a-4e6e-bc50-3d267cb056f9"). InnerVolumeSpecName "kube-api-access-fvmx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.270168 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40fa3317-086a-4e6e-bc50-3d267cb056f9" (UID: "40fa3317-086a-4e6e-bc50-3d267cb056f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.275126 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.287296 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz"]
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.287591 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" podUID="167050d8-bab9-46b3-8fd1-8c5355c15653" containerName="controller-manager" containerID="cri-o://7980ccee48cfbc86312d9652acc89d22485fe7f10b4cd446b562af0b2f7a83e8" gracePeriod=30
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.317452 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96"]
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.317926 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" podUID="deef6f4c-cf38-4133-9417-0bc7a3c999da" containerName="route-controller-manager" containerID="cri-o://3c4ae39e877a9694e81d450d2874aecbb01587370ff5d42d386f18df44e585e4" gracePeriod=30
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329049 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-catalog-content\") pod \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329085 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-656hr\" (UniqueName: \"kubernetes.io/projected/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-kube-api-access-656hr\") pod \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329127 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-utilities\") pod \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\" (UID: \"6ebd5875-2b47-4f0d-b8ad-15709cff81b9\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329222 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kubelet-dir\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329270 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kube-api-access\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329296 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-var-lock\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329341 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa3317-086a-4e6e-bc50-3d267cb056f9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329352 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvmx6\" (UniqueName: \"kubernetes.io/projected/40fa3317-086a-4e6e-bc50-3d267cb056f9-kube-api-access-fvmx6\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329395 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-var-lock\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.329583 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kubelet-dir\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.334965 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-utilities" (OuterVolumeSpecName: "utilities") pod "6ebd5875-2b47-4f0d-b8ad-15709cff81b9" (UID: "6ebd5875-2b47-4f0d-b8ad-15709cff81b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.351665 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-kube-api-access-656hr" (OuterVolumeSpecName: "kube-api-access-656hr") pod "6ebd5875-2b47-4f0d-b8ad-15709cff81b9" (UID: "6ebd5875-2b47-4f0d-b8ad-15709cff81b9"). InnerVolumeSpecName "kube-api-access-656hr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.366443 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kube-api-access\") pod \"installer-9-crc\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.420624 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ebd5875-2b47-4f0d-b8ad-15709cff81b9" (UID: "6ebd5875-2b47-4f0d-b8ad-15709cff81b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.431412 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.431544 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-656hr\" (UniqueName: \"kubernetes.io/projected/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-kube-api-access-656hr\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.431603 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebd5875-2b47-4f0d-b8ad-15709cff81b9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.515991 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.690013 4520 generic.go:334] "Generic (PLEG): container finished" podID="167050d8-bab9-46b3-8fd1-8c5355c15653" containerID="7980ccee48cfbc86312d9652acc89d22485fe7f10b4cd446b562af0b2f7a83e8" exitCode=0
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.692226 4520 generic.go:334] "Generic (PLEG): container finished" podID="deef6f4c-cf38-4133-9417-0bc7a3c999da" containerID="3c4ae39e877a9694e81d450d2874aecbb01587370ff5d42d386f18df44e585e4" exitCode=0
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.693220 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265d9231-d5db-4cdb-80b8-dfd95dffa386" path="/var/lib/kubelet/pods/265d9231-d5db-4cdb-80b8-dfd95dffa386/volumes"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.693774 4520 generic.go:334] "Generic (PLEG): container finished" podID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerID="5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6" exitCode=0
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.693885 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5sch"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.694102 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" event={"ID":"167050d8-bab9-46b3-8fd1-8c5355c15653","Type":"ContainerDied","Data":"7980ccee48cfbc86312d9652acc89d22485fe7f10b4cd446b562af0b2f7a83e8"}
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.694138 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" event={"ID":"deef6f4c-cf38-4133-9417-0bc7a3c999da","Type":"ContainerDied","Data":"3c4ae39e877a9694e81d450d2874aecbb01587370ff5d42d386f18df44e585e4"}
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.694176 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5sch" event={"ID":"40fa3317-086a-4e6e-bc50-3d267cb056f9","Type":"ContainerDied","Data":"5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6"}
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.694192 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5sch" event={"ID":"40fa3317-086a-4e6e-bc50-3d267cb056f9","Type":"ContainerDied","Data":"9ec85465118480f897c2d9bc7099284254e7c730623d4c27a2195dc9a7b8b6be"}
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.694212 4520 scope.go:117] "RemoveContainer" containerID="5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.712969 4520 generic.go:334] "Generic (PLEG): container finished" podID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerID="3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604" exitCode=0
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.713054 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcz8t"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.713113 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcz8t" event={"ID":"6ebd5875-2b47-4f0d-b8ad-15709cff81b9","Type":"ContainerDied","Data":"3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604"}
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.713147 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcz8t" event={"ID":"6ebd5875-2b47-4f0d-b8ad-15709cff81b9","Type":"ContainerDied","Data":"b2a2cf53806eeadf1f17a28089d40caa84dc4aadbbd38a8e51fbb72c6e5126c2"}
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.738875 4520 scope.go:117] "RemoveContainer" containerID="4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.743757 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kcz8t"]
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.746770 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kcz8t"]
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.750287 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5sch"]
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.758928 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b5sch"]
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.777287 4520 scope.go:117] "RemoveContainer" containerID="5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.778710 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.796780 4520 scope.go:117] "RemoveContainer" containerID="5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6"
Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.797232 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6\": container with ID starting with 5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6 not found: ID does not exist" containerID="5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.797266 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6"} err="failed to get container status \"5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6\": rpc error: code = NotFound desc = could not find container \"5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6\": container with ID starting with 5938157b24a4a8ea88f5da07f7eac59f20cfc3de61c2d863c9e4c0d442d160b6 not found: ID does not exist"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.797287 4520 scope.go:117] "RemoveContainer" containerID="4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0"
Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.797504 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0\": container with ID starting with 4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0 not found: ID does not exist" containerID="4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.797543 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0"} err="failed to get container status \"4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0\": rpc error: code = NotFound desc = could not find container \"4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0\": container with ID starting with 4dab9ed5ebee81870f7f3152ec907bbeb10bceda1b02907ddb7a89f4a1e4cfd0 not found: ID does not exist"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.797559 4520 scope.go:117] "RemoveContainer" containerID="5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b"
Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.797828 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b\": container with ID starting with 5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b not found: ID does not exist" containerID="5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.797848 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b"} err="failed to get container status \"5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b\": rpc error: code = NotFound desc = could not find container \"5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b\": container with ID starting with 5d510149510700d8d090edf5a83b97424586b09f82eebc2e9fb8ff0c0841276b not found: ID does not exist"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.797862 4520 scope.go:117] "RemoveContainer" containerID="3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.814646 4520 scope.go:117] "RemoveContainer" containerID="1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367"
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.841149 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deef6f4c-cf38-4133-9417-0bc7a3c999da-serving-cert\") pod \"deef6f4c-cf38-4133-9417-0bc7a3c999da\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.841268 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfvgh\" (UniqueName: \"kubernetes.io/projected/deef6f4c-cf38-4133-9417-0bc7a3c999da-kube-api-access-pfvgh\") pod \"deef6f4c-cf38-4133-9417-0bc7a3c999da\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.841303 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-config\") pod \"deef6f4c-cf38-4133-9417-0bc7a3c999da\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.841402 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-client-ca\") pod \"deef6f4c-cf38-4133-9417-0bc7a3c999da\" (UID: \"deef6f4c-cf38-4133-9417-0bc7a3c999da\") "
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.842431 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-client-ca" (OuterVolumeSpecName: "client-ca") pod "deef6f4c-cf38-4133-9417-0bc7a3c999da" (UID: "deef6f4c-cf38-4133-9417-0bc7a3c999da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.843028 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-config" (OuterVolumeSpecName: "config") pod "deef6f4c-cf38-4133-9417-0bc7a3c999da" (UID: "deef6f4c-cf38-4133-9417-0bc7a3c999da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.846065 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deef6f4c-cf38-4133-9417-0bc7a3c999da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "deef6f4c-cf38-4133-9417-0bc7a3c999da" (UID: "deef6f4c-cf38-4133-9417-0bc7a3c999da"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.846297 4520 scope.go:117] "RemoveContainer" containerID="d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.854433 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deef6f4c-cf38-4133-9417-0bc7a3c999da-kube-api-access-pfvgh" (OuterVolumeSpecName: "kube-api-access-pfvgh") pod "deef6f4c-cf38-4133-9417-0bc7a3c999da" (UID: "deef6f4c-cf38-4133-9417-0bc7a3c999da"). InnerVolumeSpecName "kube-api-access-pfvgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.877672 4520 scope.go:117] "RemoveContainer" containerID="3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604" Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.879804 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604\": container with ID starting with 3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604 not found: ID does not exist" containerID="3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.879842 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604"} err="failed to get container status \"3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604\": rpc error: code = NotFound desc = could not find container \"3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604\": container with ID starting with 3f3c389a6a602bb58ffafd71b32dcf9c4e720ebd0a2181926ebf4843e3f15604 not found: ID does not exist" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.879874 4520 scope.go:117] "RemoveContainer" containerID="1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367" Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.880315 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367\": container with ID starting with 1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367 not found: ID does not exist" containerID="1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.880346 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367"} err="failed to get container status \"1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367\": rpc error: code = NotFound desc = could not find container \"1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367\": container with ID starting with 1272e7c32f7f975483281d25c118066445f522fa6e5d56f2eeabc35a7724f367 not found: ID does not exist" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.880368 4520 scope.go:117] "RemoveContainer" containerID="d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf" Jan 30 06:48:14 crc kubenswrapper[4520]: E0130 06:48:14.880959 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf\": container with ID starting with d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf not found: ID does not exist" containerID="d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.880988 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf"} err="failed to get container status \"d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf\": rpc error: code = NotFound desc = could not find container \"d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf\": container with ID starting with d57738dde15e351845d0efc6289e547e1d2c034f26ddc0aded3c88de38573adf not found: ID does not exist" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.907839 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j789c"] Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.944169 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfvgh\" (UniqueName: \"kubernetes.io/projected/deef6f4c-cf38-4133-9417-0bc7a3c999da-kube-api-access-pfvgh\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.944215 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.944230 4520 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deef6f4c-cf38-4133-9417-0bc7a3c999da-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.944241 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deef6f4c-cf38-4133-9417-0bc7a3c999da-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.948766 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:48:14 crc kubenswrapper[4520]: I0130 06:48:14.962662 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.045397 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-config\") pod \"167050d8-bab9-46b3-8fd1-8c5355c15653\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.045448 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-proxy-ca-bundles\") pod \"167050d8-bab9-46b3-8fd1-8c5355c15653\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.045507 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/167050d8-bab9-46b3-8fd1-8c5355c15653-serving-cert\") pod \"167050d8-bab9-46b3-8fd1-8c5355c15653\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.045549 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-client-ca\") pod \"167050d8-bab9-46b3-8fd1-8c5355c15653\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.045590 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx6r6\" (UniqueName: \"kubernetes.io/projected/167050d8-bab9-46b3-8fd1-8c5355c15653-kube-api-access-kx6r6\") pod \"167050d8-bab9-46b3-8fd1-8c5355c15653\" (UID: \"167050d8-bab9-46b3-8fd1-8c5355c15653\") " Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.046441 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-client-ca" (OuterVolumeSpecName: "client-ca") pod "167050d8-bab9-46b3-8fd1-8c5355c15653" (UID: "167050d8-bab9-46b3-8fd1-8c5355c15653"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.046670 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-config" (OuterVolumeSpecName: "config") pod "167050d8-bab9-46b3-8fd1-8c5355c15653" (UID: "167050d8-bab9-46b3-8fd1-8c5355c15653"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.046818 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "167050d8-bab9-46b3-8fd1-8c5355c15653" (UID: "167050d8-bab9-46b3-8fd1-8c5355c15653"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.048411 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167050d8-bab9-46b3-8fd1-8c5355c15653-kube-api-access-kx6r6" (OuterVolumeSpecName: "kube-api-access-kx6r6") pod "167050d8-bab9-46b3-8fd1-8c5355c15653" (UID: "167050d8-bab9-46b3-8fd1-8c5355c15653"). InnerVolumeSpecName "kube-api-access-kx6r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.048509 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167050d8-bab9-46b3-8fd1-8c5355c15653-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "167050d8-bab9-46b3-8fd1-8c5355c15653" (UID: "167050d8-bab9-46b3-8fd1-8c5355c15653"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.146887 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/167050d8-bab9-46b3-8fd1-8c5355c15653-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.147173 4520 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.147185 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx6r6\" (UniqueName: \"kubernetes.io/projected/167050d8-bab9-46b3-8fd1-8c5355c15653-kube-api-access-kx6r6\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.147201 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.147212 4520 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/167050d8-bab9-46b3-8fd1-8c5355c15653-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450452 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf"] Jan 30 06:48:15 crc kubenswrapper[4520]: E0130 06:48:15.450777 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerName="registry-server" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450793 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerName="registry-server" Jan 30 06:48:15 crc kubenswrapper[4520]: E0130 06:48:15.450812 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deef6f4c-cf38-4133-9417-0bc7a3c999da" containerName="route-controller-manager" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450819 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="deef6f4c-cf38-4133-9417-0bc7a3c999da" containerName="route-controller-manager" Jan 30 06:48:15 crc kubenswrapper[4520]: E0130 06:48:15.450827 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerName="extract-utilities" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450834 
4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerName="extract-utilities" Jan 30 06:48:15 crc kubenswrapper[4520]: E0130 06:48:15.450843 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerName="extract-content" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450849 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerName="extract-content" Jan 30 06:48:15 crc kubenswrapper[4520]: E0130 06:48:15.450859 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerName="extract-content" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450866 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerName="extract-content" Jan 30 06:48:15 crc kubenswrapper[4520]: E0130 06:48:15.450876 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerName="extract-utilities" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450882 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerName="extract-utilities" Jan 30 06:48:15 crc kubenswrapper[4520]: E0130 06:48:15.450895 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="167050d8-bab9-46b3-8fd1-8c5355c15653" containerName="controller-manager" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450913 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="167050d8-bab9-46b3-8fd1-8c5355c15653" containerName="controller-manager" Jan 30 06:48:15 crc kubenswrapper[4520]: E0130 06:48:15.450920 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerName="registry-server" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.450927 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerName="registry-server" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.451036 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" containerName="registry-server" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.451043 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" containerName="registry-server" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.451053 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="167050d8-bab9-46b3-8fd1-8c5355c15653" containerName="controller-manager" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.451061 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="deef6f4c-cf38-4133-9417-0bc7a3c999da" containerName="route-controller-manager" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.451610 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.453826 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5696d4555f-j6m67"] Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.454865 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.477465 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5696d4555f-j6m67"] Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.504846 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf"] Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.551440 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eac717c-d126-4e2d-8bae-ef99f07ac430-serving-cert\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.551502 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-client-ca\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.551624 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-client-ca\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.551686 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-config\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.551828 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64h2h\" (UniqueName: \"kubernetes.io/projected/1eac717c-d126-4e2d-8bae-ef99f07ac430-kube-api-access-64h2h\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.551925 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-config\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.551993 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-proxy-ca-bundles\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " 
pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.552142 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzzx\" (UniqueName: \"kubernetes.io/projected/e6654e22-693d-4ffb-9fa9-56a1d7133c35-kube-api-access-7rzzx\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.552183 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6654e22-693d-4ffb-9fa9-56a1d7133c35-serving-cert\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.652798 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64h2h\" (UniqueName: \"kubernetes.io/projected/1eac717c-d126-4e2d-8bae-ef99f07ac430-kube-api-access-64h2h\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.652853 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-proxy-ca-bundles\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.652884 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-config\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.652945 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rzzx\" (UniqueName: \"kubernetes.io/projected/e6654e22-693d-4ffb-9fa9-56a1d7133c35-kube-api-access-7rzzx\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.652972 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6654e22-693d-4ffb-9fa9-56a1d7133c35-serving-cert\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.653003 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eac717c-d126-4e2d-8bae-ef99f07ac430-serving-cert\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc 
kubenswrapper[4520]: I0130 06:48:15.653024 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-client-ca\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.653048 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-client-ca\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.653077 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-config\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.654543 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-client-ca\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.654790 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-config\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.655001 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-client-ca\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.655480 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-config\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.655729 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-proxy-ca-bundles\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.658702 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eac717c-d126-4e2d-8bae-ef99f07ac430-serving-cert\") pod 
\"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.659502 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6654e22-693d-4ffb-9fa9-56a1d7133c35-serving-cert\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.669908 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64h2h\" (UniqueName: \"kubernetes.io/projected/1eac717c-d126-4e2d-8bae-ef99f07ac430-kube-api-access-64h2h\") pod \"route-controller-manager-6cf4668589-ntkwf\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.670568 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rzzx\" (UniqueName: \"kubernetes.io/projected/e6654e22-693d-4ffb-9fa9-56a1d7133c35-kube-api-access-7rzzx\") pod \"controller-manager-5696d4555f-j6m67\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.722071 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95","Type":"ContainerStarted","Data":"91ce8e86acd8c4fd243cdedc6650f216381ae501067b9765d08b50689366b63e"} Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.722136 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95","Type":"ContainerStarted","Data":"e1d9586d053abcbdb37a498c80e3ab7319ab06a9481e88168655b77eafa321f7"} Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.723601 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" event={"ID":"167050d8-bab9-46b3-8fd1-8c5355c15653","Type":"ContainerDied","Data":"73133593f0884a7747c3cd7c540ef475733749f23b91d4596f7d5a86633fae5b"} Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.723645 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.723665 4520 scope.go:117] "RemoveContainer" containerID="7980ccee48cfbc86312d9652acc89d22485fe7f10b4cd446b562af0b2f7a83e8" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.725841 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" event={"ID":"deef6f4c-cf38-4133-9417-0bc7a3c999da","Type":"ContainerDied","Data":"7c36cf86205d710aaeb8af0177a68ffe9518abbbd82f297436d1279e85374ddd"} Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.725904 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.727575 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j789c" podUID="53876f72-b696-4749-9677-8aed346a928b" containerName="registry-server" containerID="cri-o://0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24" gracePeriod=2 Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.741444 4520 scope.go:117] "RemoveContainer" containerID="3c4ae39e877a9694e81d450d2874aecbb01587370ff5d42d386f18df44e585e4" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.750748 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.750738352 podStartE2EDuration="1.750738352s" podCreationTimestamp="2026-01-30 06:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:48:15.744820892 +0000 UTC m=+209.373173073" watchObservedRunningTime="2026-01-30 06:48:15.750738352 +0000 UTC m=+209.379090532" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.764715 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.769999 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.780065 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96"] Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.782466 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bbd69-jpq96"] Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.792203 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz"] Jan 30 06:48:15 crc kubenswrapper[4520]: I0130 06:48:15.796365 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dbcfd8d67-xphrz"] Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.118261 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.159742 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-utilities\") pod \"53876f72-b696-4749-9677-8aed346a928b\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.159811 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttxzl\" (UniqueName: \"kubernetes.io/projected/53876f72-b696-4749-9677-8aed346a928b-kube-api-access-ttxzl\") pod \"53876f72-b696-4749-9677-8aed346a928b\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.159846 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-catalog-content\") pod \"53876f72-b696-4749-9677-8aed346a928b\" (UID: \"53876f72-b696-4749-9677-8aed346a928b\") " Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.161322 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf"] Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.162399 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-utilities" (OuterVolumeSpecName: "utilities") pod "53876f72-b696-4749-9677-8aed346a928b" (UID: "53876f72-b696-4749-9677-8aed346a928b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.167123 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53876f72-b696-4749-9677-8aed346a928b-kube-api-access-ttxzl" (OuterVolumeSpecName: "kube-api-access-ttxzl") pod "53876f72-b696-4749-9677-8aed346a928b" (UID: "53876f72-b696-4749-9677-8aed346a928b"). InnerVolumeSpecName "kube-api-access-ttxzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.180197 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53876f72-b696-4749-9677-8aed346a928b" (UID: "53876f72-b696-4749-9677-8aed346a928b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.239137 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5696d4555f-j6m67"] Jan 30 06:48:16 crc kubenswrapper[4520]: W0130 06:48:16.249474 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6654e22_693d_4ffb_9fa9_56a1d7133c35.slice/crio-502725a5c23dd32dbe061dc2f009993c454fc4cfa9be60a3fae83d07152f4324 WatchSource:0}: Error finding container 502725a5c23dd32dbe061dc2f009993c454fc4cfa9be60a3fae83d07152f4324: Status 404 returned error can't find the container with id 502725a5c23dd32dbe061dc2f009993c454fc4cfa9be60a3fae83d07152f4324 Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.262111 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.262146 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53876f72-b696-4749-9677-8aed346a928b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.262159 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttxzl\" (UniqueName: \"kubernetes.io/projected/53876f72-b696-4749-9677-8aed346a928b-kube-api-access-ttxzl\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.692592 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="167050d8-bab9-46b3-8fd1-8c5355c15653" path="/var/lib/kubelet/pods/167050d8-bab9-46b3-8fd1-8c5355c15653/volumes" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.693151 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40fa3317-086a-4e6e-bc50-3d267cb056f9" path="/var/lib/kubelet/pods/40fa3317-086a-4e6e-bc50-3d267cb056f9/volumes" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.693685 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebd5875-2b47-4f0d-b8ad-15709cff81b9" path="/var/lib/kubelet/pods/6ebd5875-2b47-4f0d-b8ad-15709cff81b9/volumes" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.694223 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deef6f4c-cf38-4133-9417-0bc7a3c999da" path="/var/lib/kubelet/pods/deef6f4c-cf38-4133-9417-0bc7a3c999da/volumes" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.735129 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" event={"ID":"1eac717c-d126-4e2d-8bae-ef99f07ac430","Type":"ContainerStarted","Data":"ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f"} Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.735353 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" event={"ID":"1eac717c-d126-4e2d-8bae-ef99f07ac430","Type":"ContainerStarted","Data":"71f1355193ea0c0cb60aff1d7df19ead1f4a3ff2b195b62556d828b1ac3befa7"} Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.735371 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:16 crc 
kubenswrapper[4520]: I0130 06:48:16.737053 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" event={"ID":"e6654e22-693d-4ffb-9fa9-56a1d7133c35","Type":"ContainerStarted","Data":"1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0"} Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.737078 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" event={"ID":"e6654e22-693d-4ffb-9fa9-56a1d7133c35","Type":"ContainerStarted","Data":"502725a5c23dd32dbe061dc2f009993c454fc4cfa9be60a3fae83d07152f4324"} Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.737297 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.739096 4520 generic.go:334] "Generic (PLEG): container finished" podID="53876f72-b696-4749-9677-8aed346a928b" containerID="0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24" exitCode=0 Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.739133 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j789c" event={"ID":"53876f72-b696-4749-9677-8aed346a928b","Type":"ContainerDied","Data":"0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24"} Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.739147 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j789c" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.739173 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j789c" event={"ID":"53876f72-b696-4749-9677-8aed346a928b","Type":"ContainerDied","Data":"f9e16647f30dbce40234aad8346d5b8ccc3f6a9d1735cecdeb29f0e5eefa522d"} Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.739194 4520 scope.go:117] "RemoveContainer" containerID="0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.745428 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.746188 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.755379 4520 scope.go:117] "RemoveContainer" containerID="036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.755704 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" podStartSLOduration=2.755693207 podStartE2EDuration="2.755693207s" podCreationTimestamp="2026-01-30 06:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:48:16.752626714 +0000 UTC m=+210.380978895" watchObservedRunningTime="2026-01-30 06:48:16.755693207 +0000 UTC m=+210.384045377" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.763588 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j789c"] Jan 30 06:48:16 crc 
kubenswrapper[4520]: I0130 06:48:16.765258 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j789c"] Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.769311 4520 scope.go:117] "RemoveContainer" containerID="b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.781891 4520 scope.go:117] "RemoveContainer" containerID="0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24" Jan 30 06:48:16 crc kubenswrapper[4520]: E0130 06:48:16.782336 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24\": container with ID starting with 0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24 not found: ID does not exist" containerID="0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.782374 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24"} err="failed to get container status \"0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24\": rpc error: code = NotFound desc = could not find container \"0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24\": container with ID starting with 0bea417c3c11a586f61a8ab91152738955892629315a3971c57e1e5f685fae24 not found: ID does not exist" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.782415 4520 scope.go:117] "RemoveContainer" containerID="036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce" Jan 30 06:48:16 crc kubenswrapper[4520]: E0130 06:48:16.784078 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce\": container with ID starting with 036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce not found: ID does not exist" containerID="036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.784110 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce"} err="failed to get container status \"036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce\": rpc error: code = NotFound desc = could not find container \"036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce\": container with ID starting with 036fd1b51c3bf37b5cc70b27f1dc987918b1d5798c91ca5ea5b0e17ef235e9ce not found: ID does not exist" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.784131 4520 scope.go:117] "RemoveContainer" containerID="b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc" Jan 30 06:48:16 crc kubenswrapper[4520]: E0130 06:48:16.785011 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc\": container with ID starting with b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc not found: ID does not exist" containerID="b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.785046 4520 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc"} err="failed to get container status \"b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc\": rpc error: code = NotFound desc = could not find container \"b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc\": container with ID starting with b70f3d9e328245b73e28e2e3c25933c0f8d24a4638aa8dcee8c7b8a1371543cc not found: ID does not exist" Jan 30 06:48:16 crc kubenswrapper[4520]: I0130 06:48:16.797703 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" podStartSLOduration=2.797687827 podStartE2EDuration="2.797687827s" podCreationTimestamp="2026-01-30 06:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:48:16.794129509 +0000 UTC m=+210.422481690" watchObservedRunningTime="2026-01-30 06:48:16.797687827 +0000 UTC m=+210.426040008" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.453711 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6686467b65-4qb7w"] Jan 30 06:48:18 crc kubenswrapper[4520]: E0130 06:48:18.454968 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53876f72-b696-4749-9677-8aed346a928b" containerName="extract-content" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.455001 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="53876f72-b696-4749-9677-8aed346a928b" containerName="extract-content" Jan 30 06:48:18 crc kubenswrapper[4520]: E0130 06:48:18.455027 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53876f72-b696-4749-9677-8aed346a928b" containerName="extract-utilities" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.455034 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="53876f72-b696-4749-9677-8aed346a928b" containerName="extract-utilities" Jan 30 06:48:18 crc kubenswrapper[4520]: E0130 06:48:18.455042 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53876f72-b696-4749-9677-8aed346a928b" containerName="registry-server" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.455048 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="53876f72-b696-4749-9677-8aed346a928b" containerName="registry-server" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.455154 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="53876f72-b696-4749-9677-8aed346a928b" containerName="registry-server" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.455705 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.458583 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.458587 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.458900 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.460094 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.462206 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.462726 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.462750 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.462858 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.462956 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.463134 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.463455 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.463807 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.469240 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6686467b65-4qb7w"] Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.477542 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.479209 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.482654 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490699 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " 
pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490741 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-audit-policies\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490765 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-router-certs\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490792 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-service-ca\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490809 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490827 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-error\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490843 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-session\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490864 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490883 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490899 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490919 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490938 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97fba751-b99c-4b44-9ffd-06e6e7344680-audit-dir\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490959 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5r59\" (UniqueName: \"kubernetes.io/projected/97fba751-b99c-4b44-9ffd-06e6e7344680-kube-api-access-c5r59\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.490983 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-login\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592710 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592763 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-audit-policies\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592790 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-router-certs\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592813 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-service-ca\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592838 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592863 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-session\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592883 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-error\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592913 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592937 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592962 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.592997 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.593059 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97fba751-b99c-4b44-9ffd-06e6e7344680-audit-dir\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.593103 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5r59\" (UniqueName: \"kubernetes.io/projected/97fba751-b99c-4b44-9ffd-06e6e7344680-kube-api-access-c5r59\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.593131 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-login\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.594296 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-service-ca\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.594652 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.595100 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-audit-policies\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.595473 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.595571 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97fba751-b99c-4b44-9ffd-06e6e7344680-audit-dir\") 
pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.600980 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-error\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.602878 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-session\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.603830 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.604357 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.604455 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-user-template-login\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.606226 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.608917 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.611006 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97fba751-b99c-4b44-9ffd-06e6e7344680-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.611549 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5r59\" (UniqueName: \"kubernetes.io/projected/97fba751-b99c-4b44-9ffd-06e6e7344680-kube-api-access-c5r59\") pod \"oauth-openshift-6686467b65-4qb7w\" (UID: \"97fba751-b99c-4b44-9ffd-06e6e7344680\") " pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.698863 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53876f72-b696-4749-9677-8aed346a928b" path="/var/lib/kubelet/pods/53876f72-b696-4749-9677-8aed346a928b/volumes" Jan 30 06:48:18 crc kubenswrapper[4520]: I0130 06:48:18.769534 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:19 crc kubenswrapper[4520]: I0130 06:48:19.142909 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6686467b65-4qb7w"] Jan 30 06:48:19 crc kubenswrapper[4520]: I0130 06:48:19.774770 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" event={"ID":"97fba751-b99c-4b44-9ffd-06e6e7344680","Type":"ContainerStarted","Data":"c2ee130e090a1059087b7cef46dc305bc7d33cf7086869d981652ca11e954d88"} Jan 30 06:48:19 crc kubenswrapper[4520]: I0130 06:48:19.775101 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:19 crc kubenswrapper[4520]: I0130 06:48:19.776674 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" event={"ID":"97fba751-b99c-4b44-9ffd-06e6e7344680","Type":"ContainerStarted","Data":"790fb24d02b8d0bc07e5fc6241bfc6dfbe6e2f115807e69d30d32155bdbf2629"} Jan 30 06:48:19 crc kubenswrapper[4520]: I0130 06:48:19.781456 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 06:48:19 crc kubenswrapper[4520]: I0130 06:48:19.805804 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podStartSLOduration=33.805787428 podStartE2EDuration="33.805787428s" podCreationTimestamp="2026-01-30 06:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:48:19.802067665 +0000 UTC m=+213.430419836" watchObservedRunningTime="2026-01-30 06:48:19.805787428 +0000 UTC m=+213.434139599" Jan 30 06:48:27 crc kubenswrapper[4520]: I0130 06:48:27.793338 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:48:27 crc kubenswrapper[4520]: I0130 06:48:27.793920 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:48:27 crc kubenswrapper[4520]: I0130 06:48:27.794084 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:48:27 crc kubenswrapper[4520]: I0130 06:48:27.794532 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:48:27 crc kubenswrapper[4520]: I0130 06:48:27.794591 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26" gracePeriod=600 Jan 30 06:48:28 crc kubenswrapper[4520]: I0130 06:48:28.828654 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26" exitCode=0 Jan 30 06:48:28 crc kubenswrapper[4520]: I0130 06:48:28.828726 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26"} Jan 30 06:48:28 crc kubenswrapper[4520]: I0130 06:48:28.829170 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"33eb4172918824c12d6f749038eb66206e75b7c9e4ce40339686339e4f47dc36"} Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.698694 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgth8"] Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.702041 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hgth8" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" containerName="registry-server" containerID="cri-o://62cd30d80eca5132e7221c6a0340d7a489c5badd257cb5f96bc31ad6843830d9" gracePeriod=30 Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.709339 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zm96m"] Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.710814 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zm96m" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerName="registry-server" containerID="cri-o://51dd7f6e286df9b531aa9d4e6b5e69734f73e74ce5ef50f4c46735fc502c9565" gracePeriod=30 Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.720654 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fd76j"] Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.720848 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" podUID="7d200a37-0276-4e2c-b7ef-98107be3f313" 
containerName="marketplace-operator" containerID="cri-o://4c7a0b73c98789922db0085dbcc6b8d30dd5128a5010abc97c9369dff2443b4e" gracePeriod=30 Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.728720 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6zxm"] Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.728911 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q6zxm" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerName="registry-server" containerID="cri-o://635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03" gracePeriod=30 Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.740612 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4kzxr"] Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.740738 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b9tbv"] Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.741081 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4kzxr" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="registry-server" containerID="cri-o://547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7" gracePeriod=30 Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.742059 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.750766 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b9tbv"] Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.870492 4520 generic.go:334] "Generic (PLEG): container finished" podID="7d200a37-0276-4e2c-b7ef-98107be3f313" containerID="4c7a0b73c98789922db0085dbcc6b8d30dd5128a5010abc97c9369dff2443b4e" exitCode=0 Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.870870 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" event={"ID":"7d200a37-0276-4e2c-b7ef-98107be3f313","Type":"ContainerDied","Data":"4c7a0b73c98789922db0085dbcc6b8d30dd5128a5010abc97c9369dff2443b4e"} Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.878160 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a370c00-eeac-4281-8793-33a8c2d4b9e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.878238 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a370c00-eeac-4281-8793-33a8c2d4b9e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.878313 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzm5\" (UniqueName: 
\"kubernetes.io/projected/8a370c00-eeac-4281-8793-33a8c2d4b9e2-kube-api-access-2dzm5\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.890006 4520 generic.go:334] "Generic (PLEG): container finished" podID="1186824d-c461-481a-aad1-1e0672b8bcab" containerID="62cd30d80eca5132e7221c6a0340d7a489c5badd257cb5f96bc31ad6843830d9" exitCode=0 Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.890093 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgth8" event={"ID":"1186824d-c461-481a-aad1-1e0672b8bcab","Type":"ContainerDied","Data":"62cd30d80eca5132e7221c6a0340d7a489c5badd257cb5f96bc31ad6843830d9"} Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.892172 4520 generic.go:334] "Generic (PLEG): container finished" podID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerID="51dd7f6e286df9b531aa9d4e6b5e69734f73e74ce5ef50f4c46735fc502c9565" exitCode=0 Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.892229 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zm96m" event={"ID":"1d813745-1351-4573-a0ee-7fd8e3332c6e","Type":"ContainerDied","Data":"51dd7f6e286df9b531aa9d4e6b5e69734f73e74ce5ef50f4c46735fc502c9565"} Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.979877 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzm5\" (UniqueName: \"kubernetes.io/projected/8a370c00-eeac-4281-8793-33a8c2d4b9e2-kube-api-access-2dzm5\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.979927 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a370c00-eeac-4281-8793-33a8c2d4b9e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.979992 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a370c00-eeac-4281-8793-33a8c2d4b9e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.992831 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8a370c00-eeac-4281-8793-33a8c2d4b9e2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.993176 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a370c00-eeac-4281-8793-33a8c2d4b9e2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:31 crc kubenswrapper[4520]: I0130 06:48:31.995332 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzm5\" (UniqueName: \"kubernetes.io/projected/8a370c00-eeac-4281-8793-33a8c2d4b9e2-kube-api-access-2dzm5\") pod \"marketplace-operator-79b997595-b9tbv\" (UID: \"8a370c00-eeac-4281-8793-33a8c2d4b9e2\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.062505 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.115180 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.181908 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmjvb\" (UniqueName: \"kubernetes.io/projected/1186824d-c461-481a-aad1-1e0672b8bcab-kube-api-access-pmjvb\") pod \"1186824d-c461-481a-aad1-1e0672b8bcab\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.182019 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-catalog-content\") pod \"1186824d-c461-481a-aad1-1e0672b8bcab\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.182056 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-utilities\") pod \"1186824d-c461-481a-aad1-1e0672b8bcab\" (UID: \"1186824d-c461-481a-aad1-1e0672b8bcab\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.183161 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-utilities" (OuterVolumeSpecName: "utilities") pod "1186824d-c461-481a-aad1-1e0672b8bcab" (UID: "1186824d-c461-481a-aad1-1e0672b8bcab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.194758 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1186824d-c461-481a-aad1-1e0672b8bcab-kube-api-access-pmjvb" (OuterVolumeSpecName: "kube-api-access-pmjvb") pod "1186824d-c461-481a-aad1-1e0672b8bcab" (UID: "1186824d-c461-481a-aad1-1e0672b8bcab"). InnerVolumeSpecName "kube-api-access-pmjvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.234505 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1186824d-c461-481a-aad1-1e0672b8bcab" (UID: "1186824d-c461-481a-aad1-1e0672b8bcab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.283234 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmjvb\" (UniqueName: \"kubernetes.io/projected/1186824d-c461-481a-aad1-1e0672b8bcab-kube-api-access-pmjvb\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.283339 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.283431 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1186824d-c461-481a-aad1-1e0672b8bcab-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.543402 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6zxm" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.547358 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.551551 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zm96m" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.581542 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.671060 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b9tbv"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.688726 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/1d813745-1351-4573-a0ee-7fd8e3332c6e-kube-api-access-sb8gs\") pod \"1d813745-1351-4573-a0ee-7fd8e3332c6e\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.688868 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-utilities\") pod \"1d813745-1351-4573-a0ee-7fd8e3332c6e\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.688910 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtvnz\" (UniqueName: \"kubernetes.io/projected/257ef61b-c019-4bea-8449-f5b2f9a27e47-kube-api-access-xtvnz\") pod \"257ef61b-c019-4bea-8449-f5b2f9a27e47\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.688926 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-catalog-content\") pod \"257ef61b-c019-4bea-8449-f5b2f9a27e47\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.688968 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-utilities\") pod 
\"257ef61b-c019-4bea-8449-f5b2f9a27e47\" (UID: \"257ef61b-c019-4bea-8449-f5b2f9a27e47\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.689030 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-operator-metrics\") pod \"7d200a37-0276-4e2c-b7ef-98107be3f313\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.689093 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca\") pod \"7d200a37-0276-4e2c-b7ef-98107be3f313\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.689114 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-catalog-content\") pod \"1d813745-1351-4573-a0ee-7fd8e3332c6e\" (UID: \"1d813745-1351-4573-a0ee-7fd8e3332c6e\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.689139 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-catalog-content\") pod \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.689174 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccqqt\" (UniqueName: \"kubernetes.io/projected/7d200a37-0276-4e2c-b7ef-98107be3f313-kube-api-access-ccqqt\") pod \"7d200a37-0276-4e2c-b7ef-98107be3f313\" (UID: \"7d200a37-0276-4e2c-b7ef-98107be3f313\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.689209 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2vfx\" (UniqueName: \"kubernetes.io/projected/f7e7a17d-563e-41ac-ba83-9a513203f5cb-kube-api-access-m2vfx\") pod \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.689272 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-utilities\") pod \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\" (UID: \"f7e7a17d-563e-41ac-ba83-9a513203f5cb\") " Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.691562 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-utilities" (OuterVolumeSpecName: "utilities") pod "f7e7a17d-563e-41ac-ba83-9a513203f5cb" (UID: "f7e7a17d-563e-41ac-ba83-9a513203f5cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.695646 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d200a37-0276-4e2c-b7ef-98107be3f313-kube-api-access-ccqqt" (OuterVolumeSpecName: "kube-api-access-ccqqt") pod "7d200a37-0276-4e2c-b7ef-98107be3f313" (UID: "7d200a37-0276-4e2c-b7ef-98107be3f313"). InnerVolumeSpecName "kube-api-access-ccqqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.696263 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7d200a37-0276-4e2c-b7ef-98107be3f313" (UID: "7d200a37-0276-4e2c-b7ef-98107be3f313"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.697399 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7d200a37-0276-4e2c-b7ef-98107be3f313" (UID: "7d200a37-0276-4e2c-b7ef-98107be3f313"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.697524 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257ef61b-c019-4bea-8449-f5b2f9a27e47-kube-api-access-xtvnz" (OuterVolumeSpecName: "kube-api-access-xtvnz") pod "257ef61b-c019-4bea-8449-f5b2f9a27e47" (UID: "257ef61b-c019-4bea-8449-f5b2f9a27e47"). InnerVolumeSpecName "kube-api-access-xtvnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.697988 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-utilities" (OuterVolumeSpecName: "utilities") pod "257ef61b-c019-4bea-8449-f5b2f9a27e47" (UID: "257ef61b-c019-4bea-8449-f5b2f9a27e47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.698595 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d813745-1351-4573-a0ee-7fd8e3332c6e-kube-api-access-sb8gs" (OuterVolumeSpecName: "kube-api-access-sb8gs") pod "1d813745-1351-4573-a0ee-7fd8e3332c6e" (UID: "1d813745-1351-4573-a0ee-7fd8e3332c6e"). InnerVolumeSpecName "kube-api-access-sb8gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.698961 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-utilities" (OuterVolumeSpecName: "utilities") pod "1d813745-1351-4573-a0ee-7fd8e3332c6e" (UID: "1d813745-1351-4573-a0ee-7fd8e3332c6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.701138 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e7a17d-563e-41ac-ba83-9a513203f5cb-kube-api-access-m2vfx" (OuterVolumeSpecName: "kube-api-access-m2vfx") pod "f7e7a17d-563e-41ac-ba83-9a513203f5cb" (UID: "f7e7a17d-563e-41ac-ba83-9a513203f5cb"). InnerVolumeSpecName "kube-api-access-m2vfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.717927 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7e7a17d-563e-41ac-ba83-9a513203f5cb" (UID: "f7e7a17d-563e-41ac-ba83-9a513203f5cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.768643 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d813745-1351-4573-a0ee-7fd8e3332c6e" (UID: "1d813745-1351-4573-a0ee-7fd8e3332c6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791340 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtvnz\" (UniqueName: \"kubernetes.io/projected/257ef61b-c019-4bea-8449-f5b2f9a27e47-kube-api-access-xtvnz\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791370 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791400 4520 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791411 4520 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d200a37-0276-4e2c-b7ef-98107be3f313-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791421 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791430 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791438 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccqqt\" (UniqueName: \"kubernetes.io/projected/7d200a37-0276-4e2c-b7ef-98107be3f313-kube-api-access-ccqqt\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791447 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2vfx\" (UniqueName: \"kubernetes.io/projected/f7e7a17d-563e-41ac-ba83-9a513203f5cb-kube-api-access-m2vfx\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791470 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e7a17d-563e-41ac-ba83-9a513203f5cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791480 4520 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/1d813745-1351-4573-a0ee-7fd8e3332c6e-kube-api-access-sb8gs\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.791489 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d813745-1351-4573-a0ee-7fd8e3332c6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.828441 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "257ef61b-c019-4bea-8449-f5b2f9a27e47" (UID: "257ef61b-c019-4bea-8449-f5b2f9a27e47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.892761 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257ef61b-c019-4bea-8449-f5b2f9a27e47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.897764 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" event={"ID":"7d200a37-0276-4e2c-b7ef-98107be3f313","Type":"ContainerDied","Data":"03578b1535977c81d8c7d50565409738cf3702af5bfc98a9fb07ec04c6c7fdbc"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.897820 4520 scope.go:117] "RemoveContainer" containerID="4c7a0b73c98789922db0085dbcc6b8d30dd5128a5010abc97c9369dff2443b4e" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.897942 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fd76j" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.901182 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" event={"ID":"8a370c00-eeac-4281-8793-33a8c2d4b9e2","Type":"ContainerStarted","Data":"ee91398acbede99a18c42f9f59c83c048aee4ac1c05efb2c5540ac7e734f4048"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.901232 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" event={"ID":"8a370c00-eeac-4281-8793-33a8c2d4b9e2","Type":"ContainerStarted","Data":"2c346eab8a8247db11f3d7cfbc4c9f88218e092a117ff98de6b1d8d47fa6b987"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.901460 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.903218 4520 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b9tbv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.903276 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" podUID="8a370c00-eeac-4281-8793-33a8c2d4b9e2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.905590 4520 
generic.go:334] "Generic (PLEG): container finished" podID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerID="547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7" exitCode=0 Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.905667 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzxr" event={"ID":"257ef61b-c019-4bea-8449-f5b2f9a27e47","Type":"ContainerDied","Data":"547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.905704 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kzxr" event={"ID":"257ef61b-c019-4bea-8449-f5b2f9a27e47","Type":"ContainerDied","Data":"eee2c81f31ba9368eef578fcf980d1e4f5223129c5c78c6a521a2bdf226925a6"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.905785 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4kzxr" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.912066 4520 scope.go:117] "RemoveContainer" containerID="547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.913867 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgth8" event={"ID":"1186824d-c461-481a-aad1-1e0672b8bcab","Type":"ContainerDied","Data":"785f6ee841d8b37c019fba4c0c4bd1be68868cb319408154f45601efc638ab5e"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.913885 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgth8" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.922568 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zm96m" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.922812 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zm96m" event={"ID":"1d813745-1351-4573-a0ee-7fd8e3332c6e","Type":"ContainerDied","Data":"540b33033c827a2020f996d71972f9215319d92f6f4f49ff3e9cbf6f9d3072a4"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.923196 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fd76j"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.925330 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fd76j"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.927271 4520 generic.go:334] "Generic (PLEG): container finished" podID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerID="635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03" exitCode=0 Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.927305 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6zxm" event={"ID":"f7e7a17d-563e-41ac-ba83-9a513203f5cb","Type":"ContainerDied","Data":"635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.927327 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6zxm" event={"ID":"f7e7a17d-563e-41ac-ba83-9a513203f5cb","Type":"ContainerDied","Data":"4cb600f7ef803b2ad9b6c559662233d2fab32ee402f584f138f423e2ec6a7d50"} Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.927386 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6zxm" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.928185 4520 scope.go:117] "RemoveContainer" containerID="8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.945555 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" podStartSLOduration=1.945541414 podStartE2EDuration="1.945541414s" podCreationTimestamp="2026-01-30 06:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:48:32.936824304 +0000 UTC m=+226.565176485" watchObservedRunningTime="2026-01-30 06:48:32.945541414 +0000 UTC m=+226.573893585" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.954309 4520 scope.go:117] "RemoveContainer" containerID="7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.963183 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4kzxr"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.963854 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4kzxr"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.969559 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zm96m"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.973210 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zm96m"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.978651 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgth8"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.984242 4520 scope.go:117] "RemoveContainer" containerID="547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7" Jan 30 06:48:32 crc kubenswrapper[4520]: E0130 06:48:32.984963 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7\": container with ID starting with 547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7 not found: ID does not exist" containerID="547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.984995 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7"} err="failed to get container status \"547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7\": rpc error: code = NotFound desc = could not find container \"547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7\": container with ID starting with 547770012cb8554b2547cecd1726008581635d431e1266bcf441cfe58ba833f7 not found: ID does not exist" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.985023 4520 scope.go:117] "RemoveContainer" containerID="8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db" Jan 30 06:48:32 crc kubenswrapper[4520]: E0130 06:48:32.985368 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db\": container with ID starting with 8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db not found: ID does not exist" containerID="8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.985416 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db"} err="failed to get container status \"8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db\": rpc error: code = NotFound desc = could not find container \"8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db\": container with ID starting with 8b7e63ac17122eeb5d84be81ce88bccd43d4dc0b0dc64afc7bb4479502c141db not found: ID does not exist" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.985447 4520 scope.go:117] "RemoveContainer" containerID="7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5" Jan 30 06:48:32 crc kubenswrapper[4520]: E0130 06:48:32.985811 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5\": container with ID starting with 7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5 not found: ID does not exist" containerID="7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.985836 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5"} err="failed to get container status \"7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5\": rpc error: code = NotFound desc = could not find container \"7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5\": container with ID starting with 7fd20d2c083e9ff5eb2dd1f0670a8b0abd7ebe9092e520ad37a47b753f0155d5 not found: ID does not exist" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.985854 4520 scope.go:117] "RemoveContainer" containerID="62cd30d80eca5132e7221c6a0340d7a489c5badd257cb5f96bc31ad6843830d9" Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.988233 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hgth8"] Jan 30 06:48:32 crc kubenswrapper[4520]: I0130 06:48:32.996322 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6zxm"] Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.000012 4520 scope.go:117] "RemoveContainer" containerID="1239fb4b1561a8c3361664d7a10b23cba47b66b09b7047547cfd7544088f96ab" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.004231 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6zxm"] Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.020946 4520 scope.go:117] "RemoveContainer" containerID="b7e94e61c2a8064f315a5b44901790d379c7b67c1e3e93742e488093f2614e0d" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.040094 4520 scope.go:117] "RemoveContainer" containerID="51dd7f6e286df9b531aa9d4e6b5e69734f73e74ce5ef50f4c46735fc502c9565" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.055033 4520 scope.go:117] "RemoveContainer" containerID="5c68cb236b2bb35179c551ce58b04009fc68a482cf7c683e8c2240b0a065d7d6" Jan 30 06:48:33 crc 
kubenswrapper[4520]: I0130 06:48:33.073325 4520 scope.go:117] "RemoveContainer" containerID="5daf42e469a3d335f07f26f20ca6ecaa11aaa680b5de0e02585444e0aa84e701" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.085189 4520 scope.go:117] "RemoveContainer" containerID="635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.097382 4520 scope.go:117] "RemoveContainer" containerID="13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.111254 4520 scope.go:117] "RemoveContainer" containerID="8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.125972 4520 scope.go:117] "RemoveContainer" containerID="635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.126346 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03\": container with ID starting with 635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03 not found: ID does not exist" containerID="635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.126447 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03"} err="failed to get container status \"635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03\": rpc error: code = NotFound desc = could not find container \"635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03\": container with ID starting with 635b90a9ef381c4e7c1b942841f1e2b0a87e760ac9c4b2d313f4cb6a1d534c03 not found: ID does not exist" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.126562 4520 scope.go:117] "RemoveContainer" containerID="13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.126930 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f\": container with ID starting with 13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f not found: ID does not exist" containerID="13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.126975 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f"} err="failed to get container status \"13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f\": rpc error: code = NotFound desc = could not find container \"13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f\": container with ID starting with 13de701030ef336c4122d89cfb8ce1f5dc2d5e442a20c41e941818c62770710f not found: ID does not exist" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.127008 4520 scope.go:117] "RemoveContainer" containerID="8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.127280 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468\": container with ID starting with 8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468 not found: ID does not exist" containerID="8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.127309 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468"} err="failed to get container status \"8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468\": rpc error: code = NotFound desc = could not find container \"8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468\": container with ID starting with 8e880cb5422b892e84ce554866c531bf390013b29368b8487eeb4c5a9f16b468 not found: ID does not exist" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719532 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x6w7h"] Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719790 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719805 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719816 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719822 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719831 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" containerName="extract-utilities" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719837 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" containerName="extract-utilities" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719846 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerName="extract-utilities" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719852 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerName="extract-utilities" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719860 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="extract-utilities" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719866 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="extract-utilities" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719875 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerName="extract-content" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719882 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerName="extract-content" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719895 4520 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerName="extract-utilities" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719901 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerName="extract-utilities" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719910 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerName="extract-content" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719919 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerName="extract-content" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719928 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d200a37-0276-4e2c-b7ef-98107be3f313" containerName="marketplace-operator" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719934 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d200a37-0276-4e2c-b7ef-98107be3f313" containerName="marketplace-operator" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719942 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719947 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719957 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="extract-content" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719962 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="extract-content" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719971 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719977 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: E0130 06:48:33.719984 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" containerName="extract-content" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.719991 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" containerName="extract-content" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.720072 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.720084 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.720094 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d200a37-0276-4e2c-b7ef-98107be3f313" containerName="marketplace-operator" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.720101 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 
06:48:33.720110 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" containerName="registry-server" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.722575 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.724930 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.735731 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6w7h"] Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.807025 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-utilities\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.807077 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-catalog-content\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.807105 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf5bp\" (UniqueName: \"kubernetes.io/projected/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-kube-api-access-jf5bp\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.908130 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-utilities\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.908188 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-catalog-content\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.908220 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf5bp\" (UniqueName: \"kubernetes.io/projected/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-kube-api-access-jf5bp\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.908733 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-utilities\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: 
I0130 06:48:33.909116 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-catalog-content\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.929507 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf5bp\" (UniqueName: \"kubernetes.io/projected/579f5440-0003-4f0c-b5b1-cf8b477cf9e4-kube-api-access-jf5bp\") pod \"redhat-marketplace-x6w7h\" (UID: \"579f5440-0003-4f0c-b5b1-cf8b477cf9e4\") " pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:33 crc kubenswrapper[4520]: I0130 06:48:33.940193 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.040454 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.228883 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5696d4555f-j6m67"] Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.229404 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" podUID="e6654e22-693d-4ffb-9fa9-56a1d7133c35" containerName="controller-manager" containerID="cri-o://1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0" gracePeriod=30 Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.317508 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kln7p"] Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.318956 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.324098 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.333688 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf"] Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.333874 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" podUID="1eac717c-d126-4e2d-8bae-ef99f07ac430" containerName="route-controller-manager" containerID="cri-o://ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f" gracePeriod=30 Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.342437 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kln7p"] Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.414173 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6w7h"] Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.414608 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6012817b-5b3e-49bd-9dfd-27886e0136fe-utilities\") pod \"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.414659 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6012817b-5b3e-49bd-9dfd-27886e0136fe-catalog-content\") pod \"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.414720 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkzr2\" (UniqueName: \"kubernetes.io/projected/6012817b-5b3e-49bd-9dfd-27886e0136fe-kube-api-access-nkzr2\") pod \"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.515348 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6012817b-5b3e-49bd-9dfd-27886e0136fe-utilities\") pod \"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.515395 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6012817b-5b3e-49bd-9dfd-27886e0136fe-catalog-content\") pod \"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.515427 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkzr2\" (UniqueName: \"kubernetes.io/projected/6012817b-5b3e-49bd-9dfd-27886e0136fe-kube-api-access-nkzr2\") pod 
\"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.516138 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6012817b-5b3e-49bd-9dfd-27886e0136fe-utilities\") pod \"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.516188 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6012817b-5b3e-49bd-9dfd-27886e0136fe-catalog-content\") pod \"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.534239 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkzr2\" (UniqueName: \"kubernetes.io/projected/6012817b-5b3e-49bd-9dfd-27886e0136fe-kube-api-access-nkzr2\") pod \"certified-operators-kln7p\" (UID: \"6012817b-5b3e-49bd-9dfd-27886e0136fe\") " pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.636098 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.698550 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1186824d-c461-481a-aad1-1e0672b8bcab" path="/var/lib/kubelet/pods/1186824d-c461-481a-aad1-1e0672b8bcab/volumes" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.699411 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d813745-1351-4573-a0ee-7fd8e3332c6e" path="/var/lib/kubelet/pods/1d813745-1351-4573-a0ee-7fd8e3332c6e/volumes" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.700092 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257ef61b-c019-4bea-8449-f5b2f9a27e47" path="/var/lib/kubelet/pods/257ef61b-c019-4bea-8449-f5b2f9a27e47/volumes" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.701295 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d200a37-0276-4e2c-b7ef-98107be3f313" path="/var/lib/kubelet/pods/7d200a37-0276-4e2c-b7ef-98107be3f313/volumes" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.701737 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e7a17d-563e-41ac-ba83-9a513203f5cb" path="/var/lib/kubelet/pods/f7e7a17d-563e-41ac-ba83-9a513203f5cb/volumes" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.792220 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.822414 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eac717c-d126-4e2d-8bae-ef99f07ac430-serving-cert\") pod \"1eac717c-d126-4e2d-8bae-ef99f07ac430\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.822734 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64h2h\" (UniqueName: \"kubernetes.io/projected/1eac717c-d126-4e2d-8bae-ef99f07ac430-kube-api-access-64h2h\") pod \"1eac717c-d126-4e2d-8bae-ef99f07ac430\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.822943 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-config\") pod \"1eac717c-d126-4e2d-8bae-ef99f07ac430\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.822969 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-client-ca\") pod \"1eac717c-d126-4e2d-8bae-ef99f07ac430\" (UID: \"1eac717c-d126-4e2d-8bae-ef99f07ac430\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.824207 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-config" (OuterVolumeSpecName: "config") pod "1eac717c-d126-4e2d-8bae-ef99f07ac430" (UID: "1eac717c-d126-4e2d-8bae-ef99f07ac430"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.824299 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-client-ca" (OuterVolumeSpecName: "client-ca") pod "1eac717c-d126-4e2d-8bae-ef99f07ac430" (UID: "1eac717c-d126-4e2d-8bae-ef99f07ac430"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.830912 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eac717c-d126-4e2d-8bae-ef99f07ac430-kube-api-access-64h2h" (OuterVolumeSpecName: "kube-api-access-64h2h") pod "1eac717c-d126-4e2d-8bae-ef99f07ac430" (UID: "1eac717c-d126-4e2d-8bae-ef99f07ac430"). InnerVolumeSpecName "kube-api-access-64h2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.836290 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eac717c-d126-4e2d-8bae-ef99f07ac430-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1eac717c-d126-4e2d-8bae-ef99f07ac430" (UID: "1eac717c-d126-4e2d-8bae-ef99f07ac430"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.914900 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924303 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-client-ca\") pod \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924378 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6654e22-693d-4ffb-9fa9-56a1d7133c35-serving-cert\") pod \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924428 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-config\") pod \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924466 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rzzx\" (UniqueName: \"kubernetes.io/projected/e6654e22-693d-4ffb-9fa9-56a1d7133c35-kube-api-access-7rzzx\") pod \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924498 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-proxy-ca-bundles\") pod \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\" (UID: \"e6654e22-693d-4ffb-9fa9-56a1d7133c35\") " Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924820 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64h2h\" (UniqueName: \"kubernetes.io/projected/1eac717c-d126-4e2d-8bae-ef99f07ac430-kube-api-access-64h2h\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924840 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924853 4520 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eac717c-d126-4e2d-8bae-ef99f07ac430-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924864 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eac717c-d126-4e2d-8bae-ef99f07ac430-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.924868 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6654e22-693d-4ffb-9fa9-56a1d7133c35" (UID: "e6654e22-693d-4ffb-9fa9-56a1d7133c35"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.925250 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e6654e22-693d-4ffb-9fa9-56a1d7133c35" (UID: "e6654e22-693d-4ffb-9fa9-56a1d7133c35"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.925350 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-config" (OuterVolumeSpecName: "config") pod "e6654e22-693d-4ffb-9fa9-56a1d7133c35" (UID: "e6654e22-693d-4ffb-9fa9-56a1d7133c35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.932005 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6654e22-693d-4ffb-9fa9-56a1d7133c35-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6654e22-693d-4ffb-9fa9-56a1d7133c35" (UID: "e6654e22-693d-4ffb-9fa9-56a1d7133c35"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.934703 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6654e22-693d-4ffb-9fa9-56a1d7133c35-kube-api-access-7rzzx" (OuterVolumeSpecName: "kube-api-access-7rzzx") pod "e6654e22-693d-4ffb-9fa9-56a1d7133c35" (UID: "e6654e22-693d-4ffb-9fa9-56a1d7133c35"). InnerVolumeSpecName "kube-api-access-7rzzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.945108 4520 generic.go:334] "Generic (PLEG): container finished" podID="1eac717c-d126-4e2d-8bae-ef99f07ac430" containerID="ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f" exitCode=0 Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.945315 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.945255 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" event={"ID":"1eac717c-d126-4e2d-8bae-ef99f07ac430","Type":"ContainerDied","Data":"ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f"} Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.945505 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf" event={"ID":"1eac717c-d126-4e2d-8bae-ef99f07ac430","Type":"ContainerDied","Data":"71f1355193ea0c0cb60aff1d7df19ead1f4a3ff2b195b62556d828b1ac3befa7"} Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.945582 4520 scope.go:117] "RemoveContainer" containerID="ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.949100 4520 generic.go:334] "Generic (PLEG): container finished" podID="e6654e22-693d-4ffb-9fa9-56a1d7133c35" containerID="1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0" exitCode=0 Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.949163 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" event={"ID":"e6654e22-693d-4ffb-9fa9-56a1d7133c35","Type":"ContainerDied","Data":"1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0"} Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.949182 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" event={"ID":"e6654e22-693d-4ffb-9fa9-56a1d7133c35","Type":"ContainerDied","Data":"502725a5c23dd32dbe061dc2f009993c454fc4cfa9be60a3fae83d07152f4324"} Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.949253 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5696d4555f-j6m67" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.953607 4520 generic.go:334] "Generic (PLEG): container finished" podID="579f5440-0003-4f0c-b5b1-cf8b477cf9e4" containerID="812681dca5f84df0f284baa586073fff28fd62a96c1425ef2e6162f0c932b6fc" exitCode=0 Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.955095 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6w7h" event={"ID":"579f5440-0003-4f0c-b5b1-cf8b477cf9e4","Type":"ContainerDied","Data":"812681dca5f84df0f284baa586073fff28fd62a96c1425ef2e6162f0c932b6fc"} Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.955229 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6w7h" event={"ID":"579f5440-0003-4f0c-b5b1-cf8b477cf9e4","Type":"ContainerStarted","Data":"fe8760ca2ff1d9c4d09e49c553b1d19b1b7a546b743a0c2ebd6c5fb3e854eeb6"} Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.978776 4520 scope.go:117] "RemoveContainer" containerID="ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f" Jan 30 06:48:34 crc kubenswrapper[4520]: E0130 06:48:34.979677 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f\": container with ID starting with ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f not found: ID does not exist" containerID="ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.979749 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f"} err="failed to get container status \"ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f\": rpc error: code = NotFound desc = could not find container \"ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f\": container with ID starting with ddad5cbebd15c88b2ed85302473c797328f58f4a8194c00965529132b1e1e92f not found: ID does not exist" Jan 30 06:48:34 crc kubenswrapper[4520]: I0130 06:48:34.979786 4520 scope.go:117] "RemoveContainer" containerID="1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.003219 4520 scope.go:117] "RemoveContainer" containerID="1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.005581 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf"] Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.007074 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4668589-ntkwf"] Jan 30 06:48:35 crc kubenswrapper[4520]: E0130 06:48:35.007166 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0\": container with ID starting with 1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0 not found: ID does not exist" containerID="1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.007208 4520 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0"} err="failed to get container status \"1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0\": rpc error: code = NotFound desc = could not find container \"1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0\": container with ID starting with 1596ebb3cc887d60a4ab5d787b74ddef4f1cc244cf86e9b489e58bdb706d2fc0 not found: ID does not exist" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.009437 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5696d4555f-j6m67"] Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.011450 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5696d4555f-j6m67"] Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.026600 4520 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.026660 4520 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6654e22-693d-4ffb-9fa9-56a1d7133c35-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.026673 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.026686 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rzzx\" (UniqueName: \"kubernetes.io/projected/e6654e22-693d-4ffb-9fa9-56a1d7133c35-kube-api-access-7rzzx\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.026695 4520 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6654e22-693d-4ffb-9fa9-56a1d7133c35-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.086993 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kln7p"] Jan 30 06:48:35 crc kubenswrapper[4520]: W0130 06:48:35.091565 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6012817b_5b3e_49bd_9dfd_27886e0136fe.slice/crio-f369cdf6ef0f6c5f3c38dbfbef4692d3de33ec98ef162090a7c100c08a0b6e9f WatchSource:0}: Error finding container f369cdf6ef0f6c5f3c38dbfbef4692d3de33ec98ef162090a7c100c08a0b6e9f: Status 404 returned error can't find the container with id f369cdf6ef0f6c5f3c38dbfbef4692d3de33ec98ef162090a7c100c08a0b6e9f Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.466590 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz"] Jan 30 06:48:35 crc kubenswrapper[4520]: E0130 06:48:35.466968 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eac717c-d126-4e2d-8bae-ef99f07ac430" containerName="route-controller-manager" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.466995 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eac717c-d126-4e2d-8bae-ef99f07ac430" 
containerName="route-controller-manager" Jan 30 06:48:35 crc kubenswrapper[4520]: E0130 06:48:35.467010 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6654e22-693d-4ffb-9fa9-56a1d7133c35" containerName="controller-manager" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.467018 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6654e22-693d-4ffb-9fa9-56a1d7133c35" containerName="controller-manager" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.467137 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eac717c-d126-4e2d-8bae-ef99f07ac430" containerName="route-controller-manager" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.467149 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6654e22-693d-4ffb-9fa9-56a1d7133c35" containerName="controller-manager" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.467853 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.468508 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj"] Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.469707 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.472471 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.472836 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.473043 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.473162 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.473256 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.473203 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.473569 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.474634 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.476295 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.476729 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.477076 4520 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.477294 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.488051 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.489176 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj"] Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.492260 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz"] Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.535993 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f74dba-e0dc-4507-9bb9-97664a2839c9-serving-cert\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.536036 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29f74dba-e0dc-4507-9bb9-97664a2839c9-client-ca\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.536062 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-config\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.536082 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-proxy-ca-bundles\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.536102 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcwb\" (UniqueName: \"kubernetes.io/projected/29f74dba-e0dc-4507-9bb9-97664a2839c9-kube-api-access-dpcwb\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.536151 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-client-ca\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.536178 4520 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f74dba-e0dc-4507-9bb9-97664a2839c9-config\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.536198 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc64d\" (UniqueName: \"kubernetes.io/projected/7096caef-a90c-4c67-bb72-972e1415d8c2-kube-api-access-qc64d\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.536220 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7096caef-a90c-4c67-bb72-972e1415d8c2-serving-cert\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.636844 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-client-ca\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.636887 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f74dba-e0dc-4507-9bb9-97664a2839c9-config\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.636913 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc64d\" (UniqueName: \"kubernetes.io/projected/7096caef-a90c-4c67-bb72-972e1415d8c2-kube-api-access-qc64d\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.636937 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7096caef-a90c-4c67-bb72-972e1415d8c2-serving-cert\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.636958 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f74dba-e0dc-4507-9bb9-97664a2839c9-serving-cert\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.636986 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/29f74dba-e0dc-4507-9bb9-97664a2839c9-client-ca\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.637006 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-config\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.637022 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-proxy-ca-bundles\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.637040 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpcwb\" (UniqueName: \"kubernetes.io/projected/29f74dba-e0dc-4507-9bb9-97664a2839c9-kube-api-access-dpcwb\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.638082 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29f74dba-e0dc-4507-9bb9-97664a2839c9-client-ca\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.638298 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-client-ca\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.639208 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-config\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.642283 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7096caef-a90c-4c67-bb72-972e1415d8c2-serving-cert\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.642542 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f74dba-e0dc-4507-9bb9-97664a2839c9-serving-cert\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " 
pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.643357 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f74dba-e0dc-4507-9bb9-97664a2839c9-config\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.649667 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7096caef-a90c-4c67-bb72-972e1415d8c2-proxy-ca-bundles\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.651637 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcwb\" (UniqueName: \"kubernetes.io/projected/29f74dba-e0dc-4507-9bb9-97664a2839c9-kube-api-access-dpcwb\") pod \"route-controller-manager-864b9b6b9d-wjphz\" (UID: \"29f74dba-e0dc-4507-9bb9-97664a2839c9\") " pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.654011 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc64d\" (UniqueName: \"kubernetes.io/projected/7096caef-a90c-4c67-bb72-972e1415d8c2-kube-api-access-qc64d\") pod \"controller-manager-7f8cd9cf7d-bdgpj\" (UID: \"7096caef-a90c-4c67-bb72-972e1415d8c2\") " pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.784604 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.794696 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.964335 4520 generic.go:334] "Generic (PLEG): container finished" podID="579f5440-0003-4f0c-b5b1-cf8b477cf9e4" containerID="639092ca40dc1feed57ff00f92d74bc6968b184d847ffc85b48a0974392db7a4" exitCode=0 Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.964636 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6w7h" event={"ID":"579f5440-0003-4f0c-b5b1-cf8b477cf9e4","Type":"ContainerDied","Data":"639092ca40dc1feed57ff00f92d74bc6968b184d847ffc85b48a0974392db7a4"} Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.976323 4520 generic.go:334] "Generic (PLEG): container finished" podID="6012817b-5b3e-49bd-9dfd-27886e0136fe" containerID="b90cebd2ad524f6132579d7f189e1c13f317777db7608ecd36993efd145d79be" exitCode=0 Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.976381 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kln7p" event={"ID":"6012817b-5b3e-49bd-9dfd-27886e0136fe","Type":"ContainerDied","Data":"b90cebd2ad524f6132579d7f189e1c13f317777db7608ecd36993efd145d79be"} Jan 30 06:48:35 crc kubenswrapper[4520]: I0130 06:48:35.976412 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kln7p" event={"ID":"6012817b-5b3e-49bd-9dfd-27886e0136fe","Type":"ContainerStarted","Data":"f369cdf6ef0f6c5f3c38dbfbef4692d3de33ec98ef162090a7c100c08a0b6e9f"} Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.036450 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj"] Jan 30 06:48:36 crc kubenswrapper[4520]: W0130 06:48:36.043728 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7096caef_a90c_4c67_bb72_972e1415d8c2.slice/crio-2b41979183a63b34ef91dda4db0c17d56e2e9f8984ddf989d1dfbca94b48f34a WatchSource:0}: Error finding container 2b41979183a63b34ef91dda4db0c17d56e2e9f8984ddf989d1dfbca94b48f34a: Status 404 returned error can't find the container with id 2b41979183a63b34ef91dda4db0c17d56e2e9f8984ddf989d1dfbca94b48f34a Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.120292 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8qcj"] Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.125204 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.129736 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.132244 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8qcj"] Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.146957 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsj55\" (UniqueName: \"kubernetes.io/projected/f84a02f3-3a3b-433a-bea0-98d4e37744da-kube-api-access-lsj55\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.146986 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f84a02f3-3a3b-433a-bea0-98d4e37744da-catalog-content\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.147015 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f84a02f3-3a3b-433a-bea0-98d4e37744da-utilities\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.247856 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsj55\" (UniqueName: \"kubernetes.io/projected/f84a02f3-3a3b-433a-bea0-98d4e37744da-kube-api-access-lsj55\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.247911 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f84a02f3-3a3b-433a-bea0-98d4e37744da-catalog-content\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.247948 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f84a02f3-3a3b-433a-bea0-98d4e37744da-utilities\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.248552 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f84a02f3-3a3b-433a-bea0-98d4e37744da-utilities\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.249037 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f84a02f3-3a3b-433a-bea0-98d4e37744da-catalog-content\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " 
pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.263824 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsj55\" (UniqueName: \"kubernetes.io/projected/f84a02f3-3a3b-433a-bea0-98d4e37744da-kube-api-access-lsj55\") pod \"redhat-operators-s8qcj\" (UID: \"f84a02f3-3a3b-433a-bea0-98d4e37744da\") " pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.289059 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz"] Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.438163 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.693430 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eac717c-d126-4e2d-8bae-ef99f07ac430" path="/var/lib/kubelet/pods/1eac717c-d126-4e2d-8bae-ef99f07ac430/volumes" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.694142 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6654e22-693d-4ffb-9fa9-56a1d7133c35" path="/var/lib/kubelet/pods/e6654e22-693d-4ffb-9fa9-56a1d7133c35/volumes" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.724686 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7gxfs"] Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.725650 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.728061 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.738601 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gxfs"] Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.752953 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-catalog-content\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.753012 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72dvr\" (UniqueName: \"kubernetes.io/projected/ea00fc18-fa83-4a0b-afbb-1faba49e4385-kube-api-access-72dvr\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.753044 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-utilities\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.842801 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8qcj"] Jan 30 06:48:36 crc kubenswrapper[4520]: W0130 06:48:36.845254 4520 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84a02f3_3a3b_433a_bea0_98d4e37744da.slice/crio-4a7511c37f7338c81687556fe992aad8931cc8035614d0bb5ab75ddd54724f1a WatchSource:0}: Error finding container 4a7511c37f7338c81687556fe992aad8931cc8035614d0bb5ab75ddd54724f1a: Status 404 returned error can't find the container with id 4a7511c37f7338c81687556fe992aad8931cc8035614d0bb5ab75ddd54724f1a Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.853953 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72dvr\" (UniqueName: \"kubernetes.io/projected/ea00fc18-fa83-4a0b-afbb-1faba49e4385-kube-api-access-72dvr\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.854029 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-utilities\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.854093 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-catalog-content\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.854545 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-catalog-content\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.854756 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-utilities\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.871269 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72dvr\" (UniqueName: \"kubernetes.io/projected/ea00fc18-fa83-4a0b-afbb-1faba49e4385-kube-api-access-72dvr\") pod \"community-operators-7gxfs\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.986782 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6w7h" event={"ID":"579f5440-0003-4f0c-b5b1-cf8b477cf9e4","Type":"ContainerStarted","Data":"17121faa3e0fcabfb46834f042043140a283518cac8889d8023606cbccdb8c09"} Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.989870 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" event={"ID":"29f74dba-e0dc-4507-9bb9-97664a2839c9","Type":"ContainerStarted","Data":"5b235c69892db9cf627451db8a66076d5b83d0a68e734d8cb086f5cebc831a1b"} Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 
06:48:36.989918 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" event={"ID":"29f74dba-e0dc-4507-9bb9-97664a2839c9","Type":"ContainerStarted","Data":"9480a68b475bca903b4ab86fe84aea195ae0488385179bab3814aec7f1b7a0a3"} Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.990153 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.992778 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" event={"ID":"7096caef-a90c-4c67-bb72-972e1415d8c2","Type":"ContainerStarted","Data":"f8924b366c2df6421a1873c2612a4d9b6dbcbebc41bd51eefc1f4475f59fc597"} Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.992815 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" event={"ID":"7096caef-a90c-4c67-bb72-972e1415d8c2","Type":"ContainerStarted","Data":"2b41979183a63b34ef91dda4db0c17d56e2e9f8984ddf989d1dfbca94b48f34a"} Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.993661 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.996845 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8qcj" event={"ID":"f84a02f3-3a3b-433a-bea0-98d4e37744da","Type":"ContainerStarted","Data":"5548239701c2b649c54888408ebf712f5e2b3195b763f983f25ec1c500d71f53"} Jan 30 06:48:36 crc kubenswrapper[4520]: I0130 06:48:36.996873 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8qcj" event={"ID":"f84a02f3-3a3b-433a-bea0-98d4e37744da","Type":"ContainerStarted","Data":"4a7511c37f7338c81687556fe992aad8931cc8035614d0bb5ab75ddd54724f1a"} Jan 30 06:48:37 crc kubenswrapper[4520]: I0130 06:48:37.000210 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" Jan 30 06:48:37 crc kubenswrapper[4520]: I0130 06:48:37.005345 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x6w7h" podStartSLOduration=2.528931873 podStartE2EDuration="4.005329911s" podCreationTimestamp="2026-01-30 06:48:33 +0000 UTC" firstStartedPulling="2026-01-30 06:48:34.957368152 +0000 UTC m=+228.585720333" lastFinishedPulling="2026-01-30 06:48:36.43376619 +0000 UTC m=+230.062118371" observedRunningTime="2026-01-30 06:48:37.00309832 +0000 UTC m=+230.631450502" watchObservedRunningTime="2026-01-30 06:48:37.005329911 +0000 UTC m=+230.633682092" Jan 30 06:48:37 crc kubenswrapper[4520]: I0130 06:48:37.037016 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 06:48:37 crc kubenswrapper[4520]: I0130 06:48:37.042386 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:37 crc kubenswrapper[4520]: I0130 06:48:37.047716 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podStartSLOduration=3.047694939 podStartE2EDuration="3.047694939s" podCreationTimestamp="2026-01-30 06:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:48:37.045202888 +0000 UTC m=+230.673555069" watchObservedRunningTime="2026-01-30 06:48:37.047694939 +0000 UTC m=+230.676047120" Jan 30 06:48:37 crc kubenswrapper[4520]: I0130 06:48:37.079262 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" podStartSLOduration=3.07924551 podStartE2EDuration="3.07924551s" podCreationTimestamp="2026-01-30 06:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:48:37.077773298 +0000 UTC m=+230.706125480" watchObservedRunningTime="2026-01-30 06:48:37.07924551 +0000 UTC m=+230.707597691" Jan 30 06:48:37 crc kubenswrapper[4520]: I0130 06:48:37.307138 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gxfs"] Jan 30 06:48:37 crc kubenswrapper[4520]: W0130 06:48:37.315250 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea00fc18_fa83_4a0b_afbb_1faba49e4385.slice/crio-7e98a04a05a9fddf121d22e64b9214acbf258effa6fbb00b7595de0267ba3087 WatchSource:0}: Error finding container 7e98a04a05a9fddf121d22e64b9214acbf258effa6fbb00b7595de0267ba3087: Status 404 returned error can't find the container with id 7e98a04a05a9fddf121d22e64b9214acbf258effa6fbb00b7595de0267ba3087 Jan 30 06:48:38 crc kubenswrapper[4520]: I0130 06:48:38.004359 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8qcj" event={"ID":"f84a02f3-3a3b-433a-bea0-98d4e37744da","Type":"ContainerDied","Data":"5548239701c2b649c54888408ebf712f5e2b3195b763f983f25ec1c500d71f53"} Jan 30 06:48:38 crc kubenswrapper[4520]: I0130 06:48:38.004306 4520 generic.go:334] "Generic (PLEG): container finished" podID="f84a02f3-3a3b-433a-bea0-98d4e37744da" containerID="5548239701c2b649c54888408ebf712f5e2b3195b763f983f25ec1c500d71f53" exitCode=0 Jan 30 06:48:38 crc kubenswrapper[4520]: I0130 06:48:38.004787 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8qcj" event={"ID":"f84a02f3-3a3b-433a-bea0-98d4e37744da","Type":"ContainerStarted","Data":"d0062d0d09a6e8e794b79c2bd5df78317958ed7cf0963a62acf6fb7931a086ce"} Jan 30 06:48:38 crc kubenswrapper[4520]: I0130 06:48:38.005906 4520 generic.go:334] "Generic (PLEG): container finished" podID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerID="b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f" exitCode=0 Jan 30 06:48:38 crc kubenswrapper[4520]: I0130 06:48:38.005975 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gxfs" event={"ID":"ea00fc18-fa83-4a0b-afbb-1faba49e4385","Type":"ContainerDied","Data":"b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f"} Jan 30 06:48:38 crc kubenswrapper[4520]: I0130 06:48:38.006011 4520 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-7gxfs" event={"ID":"ea00fc18-fa83-4a0b-afbb-1faba49e4385","Type":"ContainerStarted","Data":"7e98a04a05a9fddf121d22e64b9214acbf258effa6fbb00b7595de0267ba3087"} Jan 30 06:48:38 crc kubenswrapper[4520]: I0130 06:48:38.007888 4520 generic.go:334] "Generic (PLEG): container finished" podID="6012817b-5b3e-49bd-9dfd-27886e0136fe" containerID="43a3a08e23afcd4e4af83cdb4bb8f2853d33c4ccd0c6a7c3fbd645e515943e2e" exitCode=0 Jan 30 06:48:38 crc kubenswrapper[4520]: I0130 06:48:38.007978 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kln7p" event={"ID":"6012817b-5b3e-49bd-9dfd-27886e0136fe","Type":"ContainerDied","Data":"43a3a08e23afcd4e4af83cdb4bb8f2853d33c4ccd0c6a7c3fbd645e515943e2e"} Jan 30 06:48:39 crc kubenswrapper[4520]: I0130 06:48:39.016869 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kln7p" event={"ID":"6012817b-5b3e-49bd-9dfd-27886e0136fe","Type":"ContainerStarted","Data":"1b0db85e84f643537ae2a26a88dd5c9539b92c40fbf233800224ab6996d138b4"} Jan 30 06:48:39 crc kubenswrapper[4520]: I0130 06:48:39.023966 4520 generic.go:334] "Generic (PLEG): container finished" podID="f84a02f3-3a3b-433a-bea0-98d4e37744da" containerID="d0062d0d09a6e8e794b79c2bd5df78317958ed7cf0963a62acf6fb7931a086ce" exitCode=0 Jan 30 06:48:39 crc kubenswrapper[4520]: I0130 06:48:39.024162 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8qcj" event={"ID":"f84a02f3-3a3b-433a-bea0-98d4e37744da","Type":"ContainerDied","Data":"d0062d0d09a6e8e794b79c2bd5df78317958ed7cf0963a62acf6fb7931a086ce"} Jan 30 06:48:39 crc kubenswrapper[4520]: I0130 06:48:39.026577 4520 generic.go:334] "Generic (PLEG): container finished" podID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerID="921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0" exitCode=0 Jan 30 06:48:39 crc kubenswrapper[4520]: I0130 06:48:39.028303 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gxfs" event={"ID":"ea00fc18-fa83-4a0b-afbb-1faba49e4385","Type":"ContainerDied","Data":"921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0"} Jan 30 06:48:39 crc kubenswrapper[4520]: I0130 06:48:39.036706 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kln7p" podStartSLOduration=2.515662682 podStartE2EDuration="5.036686933s" podCreationTimestamp="2026-01-30 06:48:34 +0000 UTC" firstStartedPulling="2026-01-30 06:48:35.977877076 +0000 UTC m=+229.606229247" lastFinishedPulling="2026-01-30 06:48:38.498901316 +0000 UTC m=+232.127253498" observedRunningTime="2026-01-30 06:48:39.035811686 +0000 UTC m=+232.664163867" watchObservedRunningTime="2026-01-30 06:48:39.036686933 +0000 UTC m=+232.665039114" Jan 30 06:48:40 crc kubenswrapper[4520]: I0130 06:48:40.036044 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gxfs" event={"ID":"ea00fc18-fa83-4a0b-afbb-1faba49e4385","Type":"ContainerStarted","Data":"9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5"} Jan 30 06:48:40 crc kubenswrapper[4520]: I0130 06:48:40.038792 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8qcj" 
event={"ID":"f84a02f3-3a3b-433a-bea0-98d4e37744da","Type":"ContainerStarted","Data":"7ab48d15110c28ac35c8fb55c6452ec0bfe03e95c58de7979c70bec7ae62c9b5"} Jan 30 06:48:40 crc kubenswrapper[4520]: I0130 06:48:40.055907 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7gxfs" podStartSLOduration=2.41769738 podStartE2EDuration="4.055887485s" podCreationTimestamp="2026-01-30 06:48:36 +0000 UTC" firstStartedPulling="2026-01-30 06:48:38.00734347 +0000 UTC m=+231.635695651" lastFinishedPulling="2026-01-30 06:48:39.645533575 +0000 UTC m=+233.273885756" observedRunningTime="2026-01-30 06:48:40.054702434 +0000 UTC m=+233.683054615" watchObservedRunningTime="2026-01-30 06:48:40.055887485 +0000 UTC m=+233.684239666" Jan 30 06:48:40 crc kubenswrapper[4520]: I0130 06:48:40.067434 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8qcj" podStartSLOduration=1.549839404 podStartE2EDuration="4.067424512s" podCreationTimestamp="2026-01-30 06:48:36 +0000 UTC" firstStartedPulling="2026-01-30 06:48:36.999174794 +0000 UTC m=+230.627526975" lastFinishedPulling="2026-01-30 06:48:39.516759903 +0000 UTC m=+233.145112083" observedRunningTime="2026-01-30 06:48:40.067074432 +0000 UTC m=+233.695426614" watchObservedRunningTime="2026-01-30 06:48:40.067424512 +0000 UTC m=+233.695776693" Jan 30 06:48:44 crc kubenswrapper[4520]: I0130 06:48:44.040882 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:44 crc kubenswrapper[4520]: I0130 06:48:44.041615 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:44 crc kubenswrapper[4520]: I0130 06:48:44.081533 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:44 crc kubenswrapper[4520]: I0130 06:48:44.118485 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x6w7h" Jan 30 06:48:44 crc kubenswrapper[4520]: I0130 06:48:44.636356 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:44 crc kubenswrapper[4520]: I0130 06:48:44.636754 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:44 crc kubenswrapper[4520]: I0130 06:48:44.668643 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:45 crc kubenswrapper[4520]: I0130 06:48:45.098896 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kln7p" Jan 30 06:48:46 crc kubenswrapper[4520]: I0130 06:48:46.438657 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:46 crc kubenswrapper[4520]: I0130 06:48:46.439506 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:46 crc kubenswrapper[4520]: I0130 06:48:46.475592 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:47 crc kubenswrapper[4520]: I0130 06:48:47.043205 
4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:47 crc kubenswrapper[4520]: I0130 06:48:47.043265 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:47 crc kubenswrapper[4520]: I0130 06:48:47.085677 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:47 crc kubenswrapper[4520]: I0130 06:48:47.140702 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8qcj" Jan 30 06:48:47 crc kubenswrapper[4520]: I0130 06:48:47.184622 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7gxfs" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.885854 4520 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.887445 4520 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.887552 4520 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.887809 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333" gracePeriod=15 Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.887906 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888015 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370" gracePeriod=15 Jan 30 06:48:52 crc kubenswrapper[4520]: E0130 06:48:52.888048 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888070 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.887938 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782" gracePeriod=15 Jan 30 06:48:52 crc kubenswrapper[4520]: E0130 06:48:52.888081 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888121 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888119 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507" gracePeriod=15 Jan 30 06:48:52 crc kubenswrapper[4520]: E0130 06:48:52.888147 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888158 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 06:48:52 crc kubenswrapper[4520]: E0130 06:48:52.888167 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888174 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 06:48:52 crc kubenswrapper[4520]: E0130 06:48:52.888190 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888196 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 06:48:52 crc kubenswrapper[4520]: E0130 06:48:52.888205 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888214 4520 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 06:48:52 crc kubenswrapper[4520]: E0130 06:48:52.888231 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888239 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888237 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0" gracePeriod=15 Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888351 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888365 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888390 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888397 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888407 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.888672 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 06:48:52 crc kubenswrapper[4520]: I0130 06:48:52.896496 4520 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 06:48:52 crc kubenswrapper[4520]: E0130 06:48:52.944945 4520 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.25.87:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.074760 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.074803 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.074849 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.074870 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.074977 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.075033 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.075066 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.075117 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.111607 4520 generic.go:334] "Generic (PLEG): container finished" podID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" containerID="91ce8e86acd8c4fd243cdedc6650f216381ae501067b9765d08b50689366b63e" exitCode=0 Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.111684 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95","Type":"ContainerDied","Data":"91ce8e86acd8c4fd243cdedc6650f216381ae501067b9765d08b50689366b63e"} Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.112305 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:48:53 crc 
kubenswrapper[4520]: I0130 06:48:53.113850 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.115080 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.115750 4520 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782" exitCode=0 Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.115771 4520 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370" exitCode=0 Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.115781 4520 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0" exitCode=0 Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.115788 4520 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507" exitCode=2 Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.115826 4520 scope.go:117] "RemoveContainer" containerID="cf8f619733bbfb75a3e2e7ed009e8dd0e563f4b07435c272a21c6a2ea6903e89" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.175949 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.175994 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176017 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176055 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176075 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176107 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176128 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176137 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176183 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176210 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176205 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176158 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176151 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176243 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176333 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.176388 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: I0130 06:48:53.246237 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:53 crc kubenswrapper[4520]: E0130 06:48:53.277648 4520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.87:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f6f767d2dfc5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 06:48:53.276884062 +0000 UTC m=+246.905236243,LastTimestamp:2026-01-30 06:48:53.276884062 +0000 UTC m=+246.905236243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.126210 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.129050 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"60bf6a44bfcf75b17c0231f4ab4d0f6b981dbe9533aa7873bea762108394eaec"} Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.129129 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5de0a9df6fbbafbb7fce61b857010f7dfb4615492c1aa9dfd21d8e79549f03d1"} Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.129952 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:48:54 crc kubenswrapper[4520]: E0130 06:48:54.130122 4520 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.25.87:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.444087 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.444708 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.493289 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-var-lock\") pod \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.493342 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kube-api-access\") pod \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.493397 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-var-lock" (OuterVolumeSpecName: "var-lock") pod "45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" (UID: "45d3e526-f114-4fc9-8b7c-a77ec3ae6a95"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.493448 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kubelet-dir\") pod \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\" (UID: \"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95\") " Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.493683 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" (UID: "45d3e526-f114-4fc9-8b7c-a77ec3ae6a95"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.493775 4520 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.493788 4520 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.498264 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" (UID: "45d3e526-f114-4fc9-8b7c-a77ec3ae6a95"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:48:54 crc kubenswrapper[4520]: I0130 06:48:54.594404 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45d3e526-f114-4fc9-8b7c-a77ec3ae6a95-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.137943 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"45d3e526-f114-4fc9-8b7c-a77ec3ae6a95","Type":"ContainerDied","Data":"e1d9586d053abcbdb37a498c80e3ab7319ab06a9481e88168655b77eafa321f7"} Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.138328 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1d9586d053abcbdb37a498c80e3ab7319ab06a9481e88168655b77eafa321f7" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.137999 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.142288 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.259842 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.260969 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.261731 4520 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.262050 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.302575 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.302651 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.302694 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.302779 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.302834 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.302922 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.303136 4520 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.303158 4520 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:55 crc kubenswrapper[4520]: I0130 06:48:55.303167 4520 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.148648 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.152350 4520 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333" exitCode=0 Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.152471 4520 scope.go:117] "RemoveContainer" containerID="cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.152782 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.166025 4520 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.166682 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.172605 4520 scope.go:117] "RemoveContainer" containerID="3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.189616 4520 scope.go:117] "RemoveContainer" containerID="1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.203952 4520 scope.go:117] "RemoveContainer" containerID="f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.215782 4520 scope.go:117] "RemoveContainer" containerID="1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.234073 4520 scope.go:117] "RemoveContainer" containerID="e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.265509 4520 scope.go:117] "RemoveContainer" containerID="cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782" Jan 30 06:48:56 crc 
kubenswrapper[4520]: E0130 06:48:56.266047 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\": container with ID starting with cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782 not found: ID does not exist" containerID="cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.266083 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782"} err="failed to get container status \"cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\": rpc error: code = NotFound desc = could not find container \"cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782\": container with ID starting with cfd988c999e8fafef1eed91f6dbdb8425ed5aa2be2ba3587eedb3c42adf60782 not found: ID does not exist" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.266111 4520 scope.go:117] "RemoveContainer" containerID="3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370" Jan 30 06:48:56 crc kubenswrapper[4520]: E0130 06:48:56.266676 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\": container with ID starting with 3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370 not found: ID does not exist" containerID="3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.266740 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370"} err="failed to get container status \"3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\": rpc error: code = NotFound desc = could not find container \"3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370\": container with ID starting with 3fbb903e9f4cc3e49267ec932c808d89a696c07bbf9b774d60d84e1c66d45370 not found: ID does not exist" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.266755 4520 scope.go:117] "RemoveContainer" containerID="1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0" Jan 30 06:48:56 crc kubenswrapper[4520]: E0130 06:48:56.267148 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\": container with ID starting with 1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0 not found: ID does not exist" containerID="1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.267175 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0"} err="failed to get container status \"1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\": rpc error: code = NotFound desc = could not find container \"1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0\": container with ID starting with 1d98c82f6165becd29b57451410fd6240ca2c5c70c091da1905529e322ff18d0 not found: ID does not exist" Jan 30 06:48:56 crc kubenswrapper[4520]: 
I0130 06:48:56.267192 4520 scope.go:117] "RemoveContainer" containerID="f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507" Jan 30 06:48:56 crc kubenswrapper[4520]: E0130 06:48:56.267540 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\": container with ID starting with f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507 not found: ID does not exist" containerID="f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.267568 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507"} err="failed to get container status \"f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\": rpc error: code = NotFound desc = could not find container \"f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507\": container with ID starting with f14bded0e6e887261918441991dfdb3b6f97af8ee758ebb8a1d552e990de8507 not found: ID does not exist" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.267587 4520 scope.go:117] "RemoveContainer" containerID="1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333" Jan 30 06:48:56 crc kubenswrapper[4520]: E0130 06:48:56.267857 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\": container with ID starting with 1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333 not found: ID does not exist" containerID="1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.267991 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333"} err="failed to get container status \"1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\": rpc error: code = NotFound desc = could not find container \"1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333\": container with ID starting with 1df8ca64f59bff3d1a46770e956cbcb2f07162d4e9fc4552ad754c6783d38333 not found: ID does not exist" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.268006 4520 scope.go:117] "RemoveContainer" containerID="e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2" Jan 30 06:48:56 crc kubenswrapper[4520]: E0130 06:48:56.268318 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\": container with ID starting with e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2 not found: ID does not exist" containerID="e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2" Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.268341 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2"} err="failed to get container status \"e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\": rpc error: code = NotFound desc = could not find container \"e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\": container 
Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.268341 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2"} err="failed to get container status \"e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\": rpc error: code = NotFound desc = could not find container \"e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2\": container with ID starting with e5b5f2cb0dd8de89e4dff0ce59b72e058912693891072e1197deef08fbfdd8e2 not found: ID does not exist"
Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.687858 4520 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.688094 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:48:56 crc kubenswrapper[4520]: I0130 06:48:56.697625 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 30 06:48:59 crc kubenswrapper[4520]: E0130 06:48:59.456436 4520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.87:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f6f767d2dfc5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 06:48:53.276884062 +0000 UTC m=+246.905236243,LastTimestamp:2026-01-30 06:48:53.276884062 +0000 UTC m=+246.905236243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 06:49:01 crc kubenswrapper[4520]: E0130 06:49:01.770886 4520 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 192.168.25.87:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" volumeName="registry-storage"
Jan 30 06:49:02 crc kubenswrapper[4520]: E0130 06:49:02.403387 4520 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:02 crc kubenswrapper[4520]: E0130 06:49:02.404161 4520 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:02 crc kubenswrapper[4520]: E0130 06:49:02.404729 4520 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:02 crc kubenswrapper[4520]: E0130 06:49:02.404913 4520 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:02 crc kubenswrapper[4520]: E0130 06:49:02.405217 4520 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:02 crc kubenswrapper[4520]: I0130 06:49:02.405278 4520 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 30 06:49:02 crc kubenswrapper[4520]: E0130 06:49:02.405614 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused" interval="200ms"
Jan 30 06:49:02 crc kubenswrapper[4520]: E0130 06:49:02.606139 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused" interval="400ms"
Jan 30 06:49:03 crc kubenswrapper[4520]: E0130 06:49:03.007224 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused" interval="800ms"
Jan 30 06:49:03 crc kubenswrapper[4520]: E0130 06:49:03.808611 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused" interval="1.6s"
Jan 30 06:49:05 crc kubenswrapper[4520]: E0130 06:49:05.409183 4520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.87:6443: connect: connection refused" interval="3.2s"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.208635 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.208693 4520 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f" exitCode=1
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.208731 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f"}
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.209087 4520 scope.go:117] "RemoveContainer" containerID="a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.210102 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.210792 4520 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.685551 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.689231 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.689835 4520 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.690287 4520 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.690547 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.699891 4520 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f"
Jan 30 06:49:06 crc kubenswrapper[4520]: I0130 06:49:06.699939 4520 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f"
Jan 30 06:49:06 crc kubenswrapper[4520]: E0130 06:49:06.700441 4520 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:49:06 crc kubenswrapper[4520]: W0130 06:49:06.719025 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-456f9612ecb337332991100cf9854640774faefc35a3ab84ff18695442860ebb WatchSource:0}: Error finding container 456f9612ecb337332991100cf9854640774faefc35a3ab84ff18695442860ebb: Status 404 returned error can't find the container with id 456f9612ecb337332991100cf9854640774faefc35a3ab84ff18695442860ebb Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.217377 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.217491 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"095fd428385f1cab71e804b727355747743a0ae3978f0cd7cbc8185ae4c95f5e"} Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.218402 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.218944 4520 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.219591 4520 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9dc60d946eee69006fbd03e4bfbc3c1a2043f82a579da065021251f77016459c" exitCode=0 Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.219625 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9dc60d946eee69006fbd03e4bfbc3c1a2043f82a579da065021251f77016459c"} Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.219647 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"456f9612ecb337332991100cf9854640774faefc35a3ab84ff18695442860ebb"} Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.219838 4520 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f" Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.219861 4520 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f" Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.220496 4520 status_manager.go:851] "Failed to get status for pod" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:49:07 crc kubenswrapper[4520]: E0130 06:49:07.220591 4520 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:49:07 crc kubenswrapper[4520]: I0130 06:49:07.220812 4520 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.87:6443: connect: connection refused" Jan 30 06:49:08 crc kubenswrapper[4520]: I0130 06:49:08.228445 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ec39a771e958ef67867b0e9aec5ecc9cf63c3642a50b08141a2afb5647de1461"} Jan 30 06:49:08 crc kubenswrapper[4520]: I0130 06:49:08.228795 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f49197407935a1ee1d8e320b09c6874fb9864a6a4e465178b70f4f1d105abebe"} Jan 30 06:49:08 crc kubenswrapper[4520]: I0130 06:49:08.228812 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"01a2cbda6abc21597167ddf2e65202e6819dff3925c589ef993314849153e29f"} Jan 30 06:49:08 crc kubenswrapper[4520]: I0130 06:49:08.228824 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"52bec12a7defa9c942d4c12f0886462c578c57038799a3162396b4aff7384ab1"} Jan 30 06:49:08 crc kubenswrapper[4520]: I0130 06:49:08.228832 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"34da1c11db4da4f4a293cd69c8a5209954d9e4157e97e183ff5d20519c23ca4f"} Jan 30 06:49:08 crc kubenswrapper[4520]: I0130 06:49:08.229043 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:49:08 crc kubenswrapper[4520]: I0130 06:49:08.229116 4520 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f" Jan 30 06:49:08 crc kubenswrapper[4520]: I0130 06:49:08.229132 4520 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f" Jan 30 06:49:10 crc kubenswrapper[4520]: I0130 06:49:10.211554 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:49:10 crc kubenswrapper[4520]: I0130 06:49:10.211820 4520 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 06:49:10 crc kubenswrapper[4520]: I0130 06:49:10.212041 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 06:49:11 crc kubenswrapper[4520]: I0130 06:49:11.702265 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:49:11 crc kubenswrapper[4520]: I0130 06:49:11.702306 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:49:11 crc kubenswrapper[4520]: I0130 06:49:11.707203 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:49:13 crc kubenswrapper[4520]: I0130 06:49:13.026625 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:49:13 crc kubenswrapper[4520]: I0130 06:49:13.758263 4520 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:49:14 crc kubenswrapper[4520]: I0130 06:49:14.271029 4520 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f" Jan 30 06:49:14 crc kubenswrapper[4520]: I0130 06:49:14.271062 4520 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f" Jan 30 06:49:14 crc kubenswrapper[4520]: I0130 06:49:14.275425 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 06:49:15 crc kubenswrapper[4520]: I0130 06:49:15.275991 4520 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f" Jan 30 06:49:15 crc kubenswrapper[4520]: I0130 06:49:15.276025 4520 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d0ff960a-01ac-4427-a870-5a981ff4628f" Jan 30 06:49:16 crc kubenswrapper[4520]: I0130 06:49:16.707864 4520 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6557c1f6-87f6-4af5-9fd2-d1e157799b0e" Jan 30 06:49:20 crc kubenswrapper[4520]: I0130 06:49:20.212107 4520 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 06:49:20 crc kubenswrapper[4520]: I0130 06:49:20.212538 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 06:49:23 crc kubenswrapper[4520]: I0130 06:49:23.623912 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.070655 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.172484 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.303009 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.318299 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.618837 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.640193 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.644224 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.662064 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.779497 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.874683 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.934489 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.977968 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 06:49:25 crc kubenswrapper[4520]: I0130 06:49:25.990379 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.021828 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.042804 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.058428 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.068429 4520 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.084663 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.096582 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.127717 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.166298 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.683982 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.787091 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.798864 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 06:49:26 crc kubenswrapper[4520]: I0130 06:49:26.814038 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.122371 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.136695 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.239768 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.298865 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.342083 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.440991 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.488643 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.570570 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.573183 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.629768 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 
06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.641788 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.697100 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.701263 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.746579 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 06:49:27 crc kubenswrapper[4520]: I0130 06:49:27.806573 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.020221 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.088128 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.200317 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.209803 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.258662 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.306760 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.331855 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.400685 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.465481 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.478849 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.496986 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.658109 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.783136 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.788707 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 06:49:28 crc 
Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.809803 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 06:49:28 crc kubenswrapper[4520]: I0130 06:49:28.855156 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.006130 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.046657 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.051929 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.089238 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.151947 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.196057 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.234720 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.297156 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.349150 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.389954 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.431780 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.463636 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.493877 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.550565 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.645641 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.723847 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.803120 4520 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.811990 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.812069 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.815891 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.826096 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.827133 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.831835 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.831817708 podStartE2EDuration="16.831817708s" podCreationTimestamp="2026-01-30 06:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:49:29.829190905 +0000 UTC m=+283.457543086" watchObservedRunningTime="2026-01-30 06:49:29.831817708 +0000 UTC m=+283.460169889"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.851371 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.854294 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.905404 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.907609 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.915690 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.935487 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.954758 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 06:49:29 crc kubenswrapper[4520]: I0130 06:49:29.989899 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.115531 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.211323 4520 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.211371 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.211421 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.211951 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"095fd428385f1cab71e804b727355747743a0ae3978f0cd7cbc8185ae4c95f5e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.212042 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://095fd428385f1cab71e804b727355747743a0ae3978f0cd7cbc8185ae4c95f5e" gracePeriod=30
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.463330 4520 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.514734 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.575968 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.854694 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 06:49:30 crc kubenswrapper[4520]: I0130 06:49:30.978501 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.012313 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.096626 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.113633 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.227127 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.249952 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.256593 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.267816 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.321595 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.338668 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.505352 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.650673 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.668057 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.717316 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.738730 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.788471 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.853044 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.915382 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 06:49:31 crc kubenswrapper[4520]: I0130 06:49:31.931737 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.048493 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.065933 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.089845 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.100104 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.233288 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.235489 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.273119 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.304472 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.360326 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.377812 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.444404 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.453098 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.480191 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.540292 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.583245 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.652267 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.764465 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.769666 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.784704 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.785241 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.795855 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.868804 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.875780 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.933089 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.980365 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 06:49:32 crc kubenswrapper[4520]: I0130 06:49:32.988666 4520 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.022903 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.042255 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.183207 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.249486 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.277379 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.302075 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.302311 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.376109 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.379115 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.447809 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.752092 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.787181 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.854952 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 06:49:33 crc kubenswrapper[4520]: I0130 06:49:33.892666 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.001583 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.022262 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.045560 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.083382 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.104001 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.122729 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.181376 4520 reflector.go:368]
Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.231612 4520 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.262412 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.266948 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.268881 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.343043 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.371291 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.390565 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.394535 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.412680 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.428280 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.490713 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.565188 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.608258 4520 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.620078 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.633762 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.731554 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.782046 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.809471 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.809681 4520 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.818270 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.874499 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.898736 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 06:49:34 crc kubenswrapper[4520]: I0130 06:49:34.969435 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.061306 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.064225 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.108130 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.146381 4520 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.146670 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://60bf6a44bfcf75b17c0231f4ab4d0f6b981dbe9533aa7873bea762108394eaec" gracePeriod=5 Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.171488 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.200371 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.256326 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.294945 4520 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.317405 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.514480 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.682453 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.760324 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 06:49:35 crc kubenswrapper[4520]: 
I0130 06:49:35.876791 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.906496 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.913448 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.931503 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 06:49:35 crc kubenswrapper[4520]: I0130 06:49:35.951159 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.182477 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.289830 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.512385 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.522109 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.527971 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.604504 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.662941 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.719886 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.722763 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.732450 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.918490 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.923912 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 06:49:36 crc kubenswrapper[4520]: I0130 06:49:36.943645 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.026596 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 06:49:37 crc kubenswrapper[4520]: 
I0130 06:49:37.034919 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.095296 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.255385 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.264091 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.302414 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.351857 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.402922 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.415146 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.448995 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.452601 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.456089 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.704563 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.730864 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 06:49:37 crc kubenswrapper[4520]: I0130 06:49:37.937290 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.113125 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.201496 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.214908 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.239317 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.264120 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.316568 4520 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.342745 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.418376 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.491370 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.583408 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.625272 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.630730 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.680193 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.714024 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.734427 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 06:49:38 crc kubenswrapper[4520]: I0130 06:49:38.827461 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 06:49:39 crc kubenswrapper[4520]: I0130 06:49:39.123565 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 06:49:39 crc kubenswrapper[4520]: I0130 06:49:39.165648 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 06:49:39 crc kubenswrapper[4520]: I0130 06:49:39.193699 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 06:49:39 crc kubenswrapper[4520]: I0130 06:49:39.524299 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 06:49:39 crc kubenswrapper[4520]: I0130 06:49:39.790956 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 06:49:39 crc kubenswrapper[4520]: I0130 06:49:39.810071 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 06:49:39 crc kubenswrapper[4520]: I0130 06:49:39.933279 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 06:49:39 crc kubenswrapper[4520]: I0130 06:49:39.985791 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 06:49:40 crc kubenswrapper[4520]: 
I0130 06:49:40.379623 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.418152 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.418201 4520 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="60bf6a44bfcf75b17c0231f4ab4d0f6b981dbe9533aa7873bea762108394eaec" exitCode=137 Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.435402 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.674546 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.697197 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.697257 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746294 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746323 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746355 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746399 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746552 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746476 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746476 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746495 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.746924 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.752776 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.843167 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.847280 4520 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.847349 4520 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.847409 4520 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.847458 4520 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:40 crc kubenswrapper[4520]: I0130 06:49:40.847501 4520 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:41 crc kubenswrapper[4520]: I0130 06:49:41.425275 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 06:49:41 crc kubenswrapper[4520]: I0130 06:49:41.425350 4520 scope.go:117] "RemoveContainer" 
containerID="60bf6a44bfcf75b17c0231f4ab4d0f6b981dbe9533aa7873bea762108394eaec" Jan 30 06:49:41 crc kubenswrapper[4520]: I0130 06:49:41.425465 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 06:49:42 crc kubenswrapper[4520]: I0130 06:49:42.375584 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 06:49:42 crc kubenswrapper[4520]: I0130 06:49:42.674654 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 06:49:42 crc kubenswrapper[4520]: I0130 06:49:42.691992 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 06:49:46 crc kubenswrapper[4520]: I0130 06:49:46.572386 4520 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 06:50:00 crc kubenswrapper[4520]: I0130 06:50:00.517421 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 30 06:50:00 crc kubenswrapper[4520]: I0130 06:50:00.519560 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 06:50:00 crc kubenswrapper[4520]: I0130 06:50:00.519602 4520 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="095fd428385f1cab71e804b727355747743a0ae3978f0cd7cbc8185ae4c95f5e" exitCode=137 Jan 30 06:50:00 crc kubenswrapper[4520]: I0130 06:50:00.519635 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"095fd428385f1cab71e804b727355747743a0ae3978f0cd7cbc8185ae4c95f5e"} Jan 30 06:50:00 crc kubenswrapper[4520]: I0130 06:50:00.519676 4520 scope.go:117] "RemoveContainer" containerID="a2fc4983b8e4d02eb1dc38b8533f0608e955a7b49401120ab3e0ea70e2b3861f" Jan 30 06:50:01 crc kubenswrapper[4520]: I0130 06:50:01.525865 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 30 06:50:01 crc kubenswrapper[4520]: I0130 06:50:01.526571 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73eab13be3b4b866c1391779044f461309da79d260eeb6a399e5674cacf332fe"} Jan 30 06:50:03 crc kubenswrapper[4520]: I0130 06:50:03.026416 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:50:03 crc kubenswrapper[4520]: I0130 06:50:03.537976 4520 generic.go:334] "Generic (PLEG): container finished" podID="8a370c00-eeac-4281-8793-33a8c2d4b9e2" containerID="ee91398acbede99a18c42f9f59c83c048aee4ac1c05efb2c5540ac7e734f4048" exitCode=0 Jan 30 06:50:03 crc kubenswrapper[4520]: I0130 06:50:03.538061 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" event={"ID":"8a370c00-eeac-4281-8793-33a8c2d4b9e2","Type":"ContainerDied","Data":"ee91398acbede99a18c42f9f59c83c048aee4ac1c05efb2c5540ac7e734f4048"} Jan 30 06:50:03 crc kubenswrapper[4520]: I0130 06:50:03.538818 4520 scope.go:117] "RemoveContainer" containerID="ee91398acbede99a18c42f9f59c83c048aee4ac1c05efb2c5540ac7e734f4048" Jan 30 06:50:04 crc kubenswrapper[4520]: I0130 06:50:04.545427 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" event={"ID":"8a370c00-eeac-4281-8793-33a8c2d4b9e2","Type":"ContainerStarted","Data":"48f70aa961e851b3122cfca2c32029dee671cd8aa3b163dd7e01623856ee60a2"} Jan 30 06:50:04 crc kubenswrapper[4520]: I0130 06:50:04.546145 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:50:04 crc kubenswrapper[4520]: I0130 06:50:04.549980 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" Jan 30 06:50:10 crc kubenswrapper[4520]: I0130 06:50:10.211846 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:50:10 crc kubenswrapper[4520]: I0130 06:50:10.215150 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:50:10 crc kubenswrapper[4520]: I0130 06:50:10.582536 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 06:50:57 crc kubenswrapper[4520]: I0130 06:50:57.793688 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:50:57 crc kubenswrapper[4520]: I0130 06:50:57.794380 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.286097 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z4wmg"] Jan 30 06:50:58 crc kubenswrapper[4520]: E0130 06:50:58.286349 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" containerName="installer" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.286367 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" containerName="installer" Jan 30 06:50:58 crc kubenswrapper[4520]: E0130 06:50:58.286381 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.286389 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.286488 4520 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="45d3e526-f114-4fc9-8b7c-a77ec3ae6a95" containerName="installer" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.286506 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.286910 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.306230 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z4wmg"] Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.442381 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-registry-tls\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.442429 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.442450 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-registry-certificates\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.442477 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgf8r\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-kube-api-access-xgf8r\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.442962 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-bound-sa-token\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.443088 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-trusted-ca\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.443196 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.443664 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.465574 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.545473 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-registry-tls\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.545543 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.545575 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-registry-certificates\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.545608 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgf8r\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-kube-api-access-xgf8r\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.545865 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-bound-sa-token\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.545892 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-trusted-ca\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: 
\"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.545934 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.546337 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.547127 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-registry-certificates\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.547836 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-trusted-ca\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.552866 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.554272 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-registry-tls\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.564285 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-bound-sa-token\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.565049 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgf8r\" (UniqueName: \"kubernetes.io/projected/8090ef3c-b06e-4c77-a16d-ff11cc047fc5-kube-api-access-xgf8r\") pod \"image-registry-66df7c8f76-z4wmg\" (UID: \"8090ef3c-b06e-4c77-a16d-ff11cc047fc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.599880 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:58 crc kubenswrapper[4520]: I0130 06:50:58.981368 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z4wmg"] Jan 30 06:50:59 crc kubenswrapper[4520]: I0130 06:50:59.843313 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" event={"ID":"8090ef3c-b06e-4c77-a16d-ff11cc047fc5","Type":"ContainerStarted","Data":"77261ca1657899514988e007b829df8cb3a674ea1796157bfea9f9d92199e616"} Jan 30 06:50:59 crc kubenswrapper[4520]: I0130 06:50:59.843731 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" event={"ID":"8090ef3c-b06e-4c77-a16d-ff11cc047fc5","Type":"ContainerStarted","Data":"2fd48554feb8c198e1c492bad4290ac1b5209329532cdaaaa4572a381cd551a8"} Jan 30 06:50:59 crc kubenswrapper[4520]: I0130 06:50:59.843804 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:50:59 crc kubenswrapper[4520]: I0130 06:50:59.870708 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" podStartSLOduration=1.870683501 podStartE2EDuration="1.870683501s" podCreationTimestamp="2026-01-30 06:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:50:59.864061942 +0000 UTC m=+373.492414122" watchObservedRunningTime="2026-01-30 06:50:59.870683501 +0000 UTC m=+373.499035683" Jan 30 06:51:18 crc kubenswrapper[4520]: I0130 06:51:18.611329 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-z4wmg" Jan 30 06:51:18 crc kubenswrapper[4520]: I0130 06:51:18.666113 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-54cnn"] Jan 30 06:51:27 crc kubenswrapper[4520]: I0130 06:51:27.793634 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:51:27 crc kubenswrapper[4520]: I0130 06:51:27.794017 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:51:43 crc kubenswrapper[4520]: I0130 06:51:43.695751 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" podUID="28a7e740-6b3e-49a1-ac09-f802137f6a84" containerName="registry" containerID="cri-o://45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2" gracePeriod=30 Jan 30 06:51:43 crc kubenswrapper[4520]: I0130 06:51:43.986138 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.093463 4520 generic.go:334] "Generic (PLEG): container finished" podID="28a7e740-6b3e-49a1-ac09-f802137f6a84" containerID="45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2" exitCode=0
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.093509 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" event={"ID":"28a7e740-6b3e-49a1-ac09-f802137f6a84","Type":"ContainerDied","Data":"45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2"}
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.093548 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn"
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.093571 4520 scope.go:117] "RemoveContainer" containerID="45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2"
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.093559 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-54cnn" event={"ID":"28a7e740-6b3e-49a1-ac09-f802137f6a84","Type":"ContainerDied","Data":"534b03bfd48e13702765f71e86687d5a1b2255e4fea140d639a3f97782c2d4a8"}
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.114082 4520 scope.go:117] "RemoveContainer" containerID="45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2"
Jan 30 06:51:44 crc kubenswrapper[4520]: E0130 06:51:44.114605 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2\": container with ID starting with 45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2 not found: ID does not exist" containerID="45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2"
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.114643 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2"} err="failed to get container status \"45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2\": rpc error: code = NotFound desc = could not find container \"45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2\": container with ID starting with 45d655d87176c357d8ffd89ce3d037ca4503d27d5b13c51ac8375b2bbf76fdb2 not found: ID does not exist"
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.139742 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-trusted-ca\") pod \"28a7e740-6b3e-49a1-ac09-f802137f6a84\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") "
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.139844 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28a7e740-6b3e-49a1-ac09-f802137f6a84-installation-pull-secrets\") pod \"28a7e740-6b3e-49a1-ac09-f802137f6a84\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") "
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.139876 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-bound-sa-token\") pod \"28a7e740-6b3e-49a1-ac09-f802137f6a84\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") "
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.140089 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"28a7e740-6b3e-49a1-ac09-f802137f6a84\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") "
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.140165 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-tls\") pod \"28a7e740-6b3e-49a1-ac09-f802137f6a84\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") "
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.140214 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-certificates\") pod \"28a7e740-6b3e-49a1-ac09-f802137f6a84\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") "
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.140252 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rlsg\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-kube-api-access-9rlsg\") pod \"28a7e740-6b3e-49a1-ac09-f802137f6a84\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") "
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.140280 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28a7e740-6b3e-49a1-ac09-f802137f6a84-ca-trust-extracted\") pod \"28a7e740-6b3e-49a1-ac09-f802137f6a84\" (UID: \"28a7e740-6b3e-49a1-ac09-f802137f6a84\") "
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.140613 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "28a7e740-6b3e-49a1-ac09-f802137f6a84" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.140739 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.142059 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "28a7e740-6b3e-49a1-ac09-f802137f6a84" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.146743 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "28a7e740-6b3e-49a1-ac09-f802137f6a84" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.150164 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-kube-api-access-9rlsg" (OuterVolumeSpecName: "kube-api-access-9rlsg") pod "28a7e740-6b3e-49a1-ac09-f802137f6a84" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84"). InnerVolumeSpecName "kube-api-access-9rlsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.150974 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "28a7e740-6b3e-49a1-ac09-f802137f6a84" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.151110 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a7e740-6b3e-49a1-ac09-f802137f6a84-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "28a7e740-6b3e-49a1-ac09-f802137f6a84" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.151237 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "28a7e740-6b3e-49a1-ac09-f802137f6a84" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.157234 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a7e740-6b3e-49a1-ac09-f802137f6a84-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "28a7e740-6b3e-49a1-ac09-f802137f6a84" (UID: "28a7e740-6b3e-49a1-ac09-f802137f6a84"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.242106 4520 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.242132 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rlsg\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-kube-api-access-9rlsg\") on node \"crc\" DevicePath \"\""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.242144 4520 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28a7e740-6b3e-49a1-ac09-f802137f6a84-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.242154 4520 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28a7e740-6b3e-49a1-ac09-f802137f6a84-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.242164 4520 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.242173 4520 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28a7e740-6b3e-49a1-ac09-f802137f6a84-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.426276 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-54cnn"]
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.430844 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-54cnn"]
Jan 30 06:51:44 crc kubenswrapper[4520]: I0130 06:51:44.691138 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a7e740-6b3e-49a1-ac09-f802137f6a84" path="/var/lib/kubelet/pods/28a7e740-6b3e-49a1-ac09-f802137f6a84/volumes"
Jan 30 06:51:57 crc kubenswrapper[4520]: I0130 06:51:57.793771 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 06:51:57 crc kubenswrapper[4520]: I0130 06:51:57.794447 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 06:51:57 crc kubenswrapper[4520]: I0130 06:51:57.794540 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt"
Jan 30 06:51:57 crc kubenswrapper[4520]: I0130 06:51:57.795770 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33eb4172918824c12d6f749038eb66206e75b7c9e4ce40339686339e4f47dc36"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 06:51:57 crc kubenswrapper[4520]: I0130 06:51:57.795940 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://33eb4172918824c12d6f749038eb66206e75b7c9e4ce40339686339e4f47dc36" gracePeriod=600
Jan 30 06:51:58 crc kubenswrapper[4520]: I0130 06:51:58.167792 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="33eb4172918824c12d6f749038eb66206e75b7c9e4ce40339686339e4f47dc36" exitCode=0
Jan 30 06:51:58 crc kubenswrapper[4520]: I0130 06:51:58.167884 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"33eb4172918824c12d6f749038eb66206e75b7c9e4ce40339686339e4f47dc36"}
Jan 30 06:51:58 crc kubenswrapper[4520]: I0130 06:51:58.168154 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"262e0cf10792038e17c9535c842bb850c34802d1edf6585f98c352abd0f2a350"}
Jan 30 06:51:58 crc kubenswrapper[4520]: I0130 06:51:58.168180 4520 scope.go:117] "RemoveContainer" containerID="bd69fadb06e7ce2c9a3d7618190a76de08974f58a46058a5e55250f74214ba26"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.572420 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qw9bt"]
Jan 30 06:52:49 crc kubenswrapper[4520]: E0130 06:52:49.573189 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a7e740-6b3e-49a1-ac09-f802137f6a84" containerName="registry"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.573205 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a7e740-6b3e-49a1-ac09-f802137f6a84" containerName="registry"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.573323 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a7e740-6b3e-49a1-ac09-f802137f6a84" containerName="registry"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.573738 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qw9bt"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.578373 4520 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-8pglh"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.578391 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.578589 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.581396 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hsgng"]
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.581847 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsgng"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.584929 4520 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-mrhsc"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.588062 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qw9bt"]
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.593436 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4r4pj"]
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.594193 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.596953 4520 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4v94h"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.600097 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hsgng"]
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.610797 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4r4pj"]
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.771210 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs8zr\" (UniqueName: \"kubernetes.io/projected/928273b1-c655-46cb-860d-584378c92f40-kube-api-access-hs8zr\") pod \"cert-manager-webhook-687f57d79b-4r4pj\" (UID: \"928273b1-c655-46cb-860d-584378c92f40\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.771336 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h54t4\" (UniqueName: \"kubernetes.io/projected/c550813f-661d-4b33-9a3b-60186c554fbd-kube-api-access-h54t4\") pod \"cert-manager-858654f9db-qw9bt\" (UID: \"c550813f-661d-4b33-9a3b-60186c554fbd\") " pod="cert-manager/cert-manager-858654f9db-qw9bt"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.771424 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rg7g\" (UniqueName: \"kubernetes.io/projected/0768d977-a801-4127-92ac-5b9197ff478d-kube-api-access-9rg7g\") pod \"cert-manager-cainjector-cf98fcc89-hsgng\" (UID: \"0768d977-a801-4127-92ac-5b9197ff478d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsgng"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.872498 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rg7g\" (UniqueName: \"kubernetes.io/projected/0768d977-a801-4127-92ac-5b9197ff478d-kube-api-access-9rg7g\") pod \"cert-manager-cainjector-cf98fcc89-hsgng\" (UID: \"0768d977-a801-4127-92ac-5b9197ff478d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsgng"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.872605 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs8zr\" (UniqueName: \"kubernetes.io/projected/928273b1-c655-46cb-860d-584378c92f40-kube-api-access-hs8zr\") pod \"cert-manager-webhook-687f57d79b-4r4pj\" (UID: \"928273b1-c655-46cb-860d-584378c92f40\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.872656 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h54t4\" (UniqueName: \"kubernetes.io/projected/c550813f-661d-4b33-9a3b-60186c554fbd-kube-api-access-h54t4\") pod \"cert-manager-858654f9db-qw9bt\" (UID: \"c550813f-661d-4b33-9a3b-60186c554fbd\") " pod="cert-manager/cert-manager-858654f9db-qw9bt"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.890771 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h54t4\" (UniqueName: \"kubernetes.io/projected/c550813f-661d-4b33-9a3b-60186c554fbd-kube-api-access-h54t4\") pod \"cert-manager-858654f9db-qw9bt\" (UID: \"c550813f-661d-4b33-9a3b-60186c554fbd\") " pod="cert-manager/cert-manager-858654f9db-qw9bt"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.890980 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rg7g\" (UniqueName: \"kubernetes.io/projected/0768d977-a801-4127-92ac-5b9197ff478d-kube-api-access-9rg7g\") pod \"cert-manager-cainjector-cf98fcc89-hsgng\" (UID: \"0768d977-a801-4127-92ac-5b9197ff478d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsgng"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.892022 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs8zr\" (UniqueName: \"kubernetes.io/projected/928273b1-c655-46cb-860d-584378c92f40-kube-api-access-hs8zr\") pod \"cert-manager-webhook-687f57d79b-4r4pj\" (UID: \"928273b1-c655-46cb-860d-584378c92f40\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.894639 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsgng"
Jan 30 06:52:49 crc kubenswrapper[4520]: I0130 06:52:49.906063 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj"
Jan 30 06:52:50 crc kubenswrapper[4520]: I0130 06:52:50.106810 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hsgng"]
Jan 30 06:52:50 crc kubenswrapper[4520]: I0130 06:52:50.119821 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 06:52:50 crc kubenswrapper[4520]: I0130 06:52:50.138731 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4r4pj"]
Jan 30 06:52:50 crc kubenswrapper[4520]: W0130 06:52:50.140955 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod928273b1_c655_46cb_860d_584378c92f40.slice/crio-22e7dcef72ee08c964cf349a3b2aed34aa618863dbc0b86fd6eadf367e112e1d WatchSource:0}: Error finding container 22e7dcef72ee08c964cf349a3b2aed34aa618863dbc0b86fd6eadf367e112e1d: Status 404 returned error can't find the container with id 22e7dcef72ee08c964cf349a3b2aed34aa618863dbc0b86fd6eadf367e112e1d
Jan 30 06:52:50 crc kubenswrapper[4520]: I0130 06:52:50.188447 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qw9bt"
Jan 30 06:52:50 crc kubenswrapper[4520]: I0130 06:52:50.394146 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qw9bt"]
Jan 30 06:52:50 crc kubenswrapper[4520]: W0130 06:52:50.395712 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc550813f_661d_4b33_9a3b_60186c554fbd.slice/crio-c9ecfbcc156ec55593ae848958004b69835fa8dfc85c19caec71048092210945 WatchSource:0}: Error finding container c9ecfbcc156ec55593ae848958004b69835fa8dfc85c19caec71048092210945: Status 404 returned error can't find the container with id c9ecfbcc156ec55593ae848958004b69835fa8dfc85c19caec71048092210945
Jan 30 06:52:50 crc kubenswrapper[4520]: I0130 06:52:50.449474 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj" event={"ID":"928273b1-c655-46cb-860d-584378c92f40","Type":"ContainerStarted","Data":"22e7dcef72ee08c964cf349a3b2aed34aa618863dbc0b86fd6eadf367e112e1d"}
Jan 30 06:52:50 crc kubenswrapper[4520]: I0130 06:52:50.450788 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qw9bt" event={"ID":"c550813f-661d-4b33-9a3b-60186c554fbd","Type":"ContainerStarted","Data":"c9ecfbcc156ec55593ae848958004b69835fa8dfc85c19caec71048092210945"}
Jan 30 06:52:50 crc kubenswrapper[4520]: I0130 06:52:50.451938 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsgng" event={"ID":"0768d977-a801-4127-92ac-5b9197ff478d","Type":"ContainerStarted","Data":"ab55ecc690a8cfdbf810391a77f8082ab3d760a45fa9425dd99a0582763518cd"}
Jan 30 06:52:53 crc kubenswrapper[4520]: I0130 06:52:53.476567 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj" event={"ID":"928273b1-c655-46cb-860d-584378c92f40","Type":"ContainerStarted","Data":"c752ecfb3dc5066c32f162825bbb2acce7cee896688c23362aacc20d3a39659b"}
Jan 30 06:52:53 crc kubenswrapper[4520]: I0130 06:52:53.476880 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj"
Jan 30 06:52:53 crc kubenswrapper[4520]: I0130 06:52:53.477849 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsgng" event={"ID":"0768d977-a801-4127-92ac-5b9197ff478d","Type":"ContainerStarted","Data":"bae11cdcf671fb04a15170d5150c3cdfc9c8aa0708375874da53a8e5c805b493"}
Jan 30 06:52:53 crc kubenswrapper[4520]: I0130 06:52:53.491463 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj" podStartSLOduration=2.002827305 podStartE2EDuration="4.491445218s" podCreationTimestamp="2026-01-30 06:52:49 +0000 UTC" firstStartedPulling="2026-01-30 06:52:50.144296188 +0000 UTC m=+483.772648369" lastFinishedPulling="2026-01-30 06:52:52.632914101 +0000 UTC m=+486.261266282" observedRunningTime="2026-01-30 06:52:53.490420051 +0000 UTC m=+487.118772232" watchObservedRunningTime="2026-01-30 06:52:53.491445218 +0000 UTC m=+487.119797398"
Jan 30 06:52:53 crc kubenswrapper[4520]: I0130 06:52:53.519545 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hsgng" podStartSLOduration=1.9754912839999998 podStartE2EDuration="4.519503776s" podCreationTimestamp="2026-01-30 06:52:49 +0000 UTC" firstStartedPulling="2026-01-30 06:52:50.119491293 +0000 UTC m=+483.747843475" lastFinishedPulling="2026-01-30 06:52:52.663503786 +0000 UTC m=+486.291855967" observedRunningTime="2026-01-30 06:52:53.506393659 +0000 UTC m=+487.134745840" watchObservedRunningTime="2026-01-30 06:52:53.519503776 +0000 UTC m=+487.147855957"
Jan 30 06:52:54 crc kubenswrapper[4520]: I0130 06:52:54.484498 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qw9bt" event={"ID":"c550813f-661d-4b33-9a3b-60186c554fbd","Type":"ContainerStarted","Data":"3fcb6ae1a6a8d15b8383fa7081d8ec90bd02e0cb45fc3c2660b90966d704f6d7"}
Jan 30 06:52:54 crc kubenswrapper[4520]: I0130 06:52:54.496608 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qw9bt" podStartSLOduration=2.465458283 podStartE2EDuration="5.496583983s" podCreationTimestamp="2026-01-30 06:52:49 +0000 UTC" firstStartedPulling="2026-01-30 06:52:50.397622305 +0000 UTC m=+484.025974487" lastFinishedPulling="2026-01-30 06:52:53.428748005 +0000 UTC m=+487.057100187" observedRunningTime="2026-01-30 06:52:54.495733174 +0000 UTC m=+488.124085356" watchObservedRunningTime="2026-01-30 06:52:54.496583983 +0000 UTC m=+488.124936164"
Jan 30 06:52:59 crc kubenswrapper[4520]: I0130 06:52:59.910558 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.211380 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tm5s"]
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.212173 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovn-controller" containerID="cri-o://498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97" gracePeriod=30
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.212233 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="nbdb" containerID="cri-o://40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5" gracePeriod=30
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.212294 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="northd" containerID="cri-o://bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f" gracePeriod=30
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.212341 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157" gracePeriod=30
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.212372 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kube-rbac-proxy-node" containerID="cri-o://f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236" gracePeriod=30
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.212402 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovn-acl-logging" containerID="cri-o://7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7" gracePeriod=30
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.212742 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="sbdb" containerID="cri-o://7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a" gracePeriod=30
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.265476 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller" containerID="cri-o://64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae" gracePeriod=30
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.488893 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/3.log"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.491410 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovn-acl-logging/0.log"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.491991 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovn-controller/0.log"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.492485 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.539858 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j7rbl"]
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540142 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540163 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540174 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540183 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540191 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="sbdb"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540198 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="sbdb"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540209 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovn-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540216 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovn-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540222 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540227 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540237 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kube-rbac-proxy-node"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540242 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kube-rbac-proxy-node"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540250 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="nbdb"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540256 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="nbdb"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540263 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540269 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540277 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovn-acl-logging"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540283 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovn-acl-logging"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540293 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="northd"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540298 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="northd"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540304 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540310 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540317 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kubecfg-setup"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540322 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kubecfg-setup"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540418 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovn-acl-logging"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540428 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kube-rbac-proxy-node"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540436 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540443 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540449 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovn-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540456 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="sbdb"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540466 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540472 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540483 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="northd"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540490 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="nbdb"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.540611 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540618 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540701 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.540726 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerName="ovnkube-controller"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.542358 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.614825 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovnkube-controller/3.log"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.617580 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovn-acl-logging/0.log"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618079 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tm5s_705f09bd-e1b6-47fd-83db-189fbe9a7b95/ovn-controller/0.log"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618554 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae" exitCode=0
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618602 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a" exitCode=0
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618613 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5" exitCode=0
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618627 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f" exitCode=0
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618636 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618638 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157" exitCode=0
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618739 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236" exitCode=0
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618757 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7" exitCode=143
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618777 4520 generic.go:334] "Generic (PLEG): container finished" podID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" containerID="498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97" exitCode=143
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618635 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618885 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618918 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618935 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618949 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618966 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.618987 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619001 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619008 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619015 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619022 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619030 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619039 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619045 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619051 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619061 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619072 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619080 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619086 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619093 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619098 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619105 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619111 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619116 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619123 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619129 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619139 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619154 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619162 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619171 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619179 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619185 4520 scope.go:117] "RemoveContainer" containerID="64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619190 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619299 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619320 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619343 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619351 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619357 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619381 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tm5s" event={"ID":"705f09bd-e1b6-47fd-83db-189fbe9a7b95","Type":"ContainerDied","Data":"b4b099ea8e0891d3de244a88fda2e4e91bb5cb4c6c534b366fcf81c2e100acc7"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619424 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619432 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619439 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619447 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619453 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619460 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619465 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619470 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619475 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.619481 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.621328 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/2.log"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.621918 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/1.log"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.621973 4520 generic.go:334] "Generic (PLEG): container finished" podID="dfdf507d-4d3e-40ac-a9dc-c39c411f4c26" containerID="62c6675ec316ce30555a257a931998d24e9ffbaca75aed0464d002d9f6c3c7cf" exitCode=2
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.622011 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mn7g2" event={"ID":"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26","Type":"ContainerDied","Data":"62c6675ec316ce30555a257a931998d24e9ffbaca75aed0464d002d9f6c3c7cf"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.622034 4520 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21"}
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.622591 4520 scope.go:117] "RemoveContainer" containerID="62c6675ec316ce30555a257a931998d24e9ffbaca75aed0464d002d9f6c3c7cf"
Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.622817 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mn7g2_openshift-multus(dfdf507d-4d3e-40ac-a9dc-c39c411f4c26)\"" pod="openshift-multus/multus-mn7g2" podUID="dfdf507d-4d3e-40ac-a9dc-c39c411f4c26"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.630966 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-systemd\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631007 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc94g\" (UniqueName: \"kubernetes.io/projected/705f09bd-e1b6-47fd-83db-189fbe9a7b95-kube-api-access-zc94g\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631044 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-etc-openvswitch\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631085 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-var-lib-cni-networks-ovn-kubernetes\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631118 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-ovn-kubernetes\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631149 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-env-overrides\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631169 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-var-lib-openvswitch\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631201 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-config\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631220 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-systemd-units\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631248 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-openvswitch\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631272 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-script-lib\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631289 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-log-socket\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631313 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovn-node-metrics-cert\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631330 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-kubelet\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631348 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-netns\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631377 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-netd\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631406 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-ovn\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631422 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-node-log\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631438 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-slash\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631451 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-bin\") pod \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\" (UID: \"705f09bd-e1b6-47fd-83db-189fbe9a7b95\") "
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631639 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-run-netns\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631665 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-ovn\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631686 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631717 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-slash\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631745 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-ovnkube-config\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631765 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-ovnkube-script-lib\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631817 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-systemd\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631854 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-env-overrides\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631875 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/171f1787-d3df-4513-8002-6aee04444d1a-ovn-node-metrics-cert\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631908 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr54m\" (UniqueName: \"kubernetes.io/projected/171f1787-d3df-4513-8002-6aee04444d1a-kube-api-access-kr54m\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631928 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-cni-bin\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631961 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-var-lib-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.631991 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632264 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-cni-netd\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632300 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632338 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-kubelet\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632382 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-node-log\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632408 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-systemd-units\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632426 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-log-socket\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632442 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-etc-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl"
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632508 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632555 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632576 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632768 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.632862 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-log-socket" (OuterVolumeSpecName: "log-socket") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633133 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633235 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633586 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633593 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633620 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633654 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633663 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-node-log" (OuterVolumeSpecName: "node-log") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633690 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633695 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-slash" (OuterVolumeSpecName: "host-slash") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633740 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.633756 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.634207 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.634364 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.637583 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705f09bd-e1b6-47fd-83db-189fbe9a7b95-kube-api-access-zc94g" (OuterVolumeSpecName: "kube-api-access-zc94g") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "kube-api-access-zc94g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.639835 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.651397 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "705f09bd-e1b6-47fd-83db-189fbe9a7b95" (UID: "705f09bd-e1b6-47fd-83db-189fbe9a7b95"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.659845 4520 scope.go:117] "RemoveContainer" containerID="7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.674551 4520 scope.go:117] "RemoveContainer" containerID="40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.688702 4520 scope.go:117] "RemoveContainer" containerID="bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.708253 4520 scope.go:117] "RemoveContainer" containerID="df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.721099 4520 scope.go:117] "RemoveContainer" containerID="f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.731538 4520 scope.go:117] "RemoveContainer" containerID="7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733322 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-slash\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733449 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-ovnkube-config\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733570 4520 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-ovnkube-script-lib\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733676 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-systemd\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733772 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-env-overrides\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733865 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-systemd\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733871 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/171f1787-d3df-4513-8002-6aee04444d1a-ovn-node-metrics-cert\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733950 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr54m\" (UniqueName: \"kubernetes.io/projected/171f1787-d3df-4513-8002-6aee04444d1a-kube-api-access-kr54m\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733455 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-slash\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.733979 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-cni-bin\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734012 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-var-lib-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734034 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734065 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734082 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-cni-netd\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734089 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-cni-bin\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734129 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-kubelet\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734108 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-kubelet\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734161 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734183 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734191 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-var-lib-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734205 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-cni-netd\") pod \"ovnkube-node-j7rbl\" (UID: 
\"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734272 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-ovnkube-script-lib\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734305 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-node-log\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734281 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-node-log\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734378 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-systemd-units\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734417 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-log-socket\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734439 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-etc-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734471 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-systemd-units\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734491 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-log-socket\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734498 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-run-netns\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734505 4520 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-etc-openvswitch\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734472 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-run-netns\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734590 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-ovn\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734613 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734615 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-ovnkube-config\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734635 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734654 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/171f1787-d3df-4513-8002-6aee04444d1a-run-ovn\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734753 4520 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734769 4520 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734779 4520 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734787 4520 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734797 4520 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734805 4520 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734814 4520 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734825 4520 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734841 4520 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734849 4520 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734856 4520 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734865 4520 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734873 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc94g\" (UniqueName: \"kubernetes.io/projected/705f09bd-e1b6-47fd-83db-189fbe9a7b95-kube-api-access-zc94g\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734881 4520 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734889 4520 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734901 4520 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734909 4520 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734917 4520 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734925 4520 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/705f09bd-e1b6-47fd-83db-189fbe9a7b95-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.734933 4520 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/705f09bd-e1b6-47fd-83db-189fbe9a7b95-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.735005 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/171f1787-d3df-4513-8002-6aee04444d1a-env-overrides\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.737903 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/171f1787-d3df-4513-8002-6aee04444d1a-ovn-node-metrics-cert\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.744379 4520 scope.go:117] "RemoveContainer" containerID="498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.747786 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr54m\" (UniqueName: \"kubernetes.io/projected/171f1787-d3df-4513-8002-6aee04444d1a-kube-api-access-kr54m\") pod \"ovnkube-node-j7rbl\" (UID: \"171f1787-d3df-4513-8002-6aee04444d1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.755557 4520 scope.go:117] "RemoveContainer" containerID="56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.768086 4520 scope.go:117] "RemoveContainer" containerID="64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.768477 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": container with ID starting with 64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae not found: ID does not exist" containerID="64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.768653 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"} err="failed to get container status \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": rpc error: code = NotFound desc = could not find 
container \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": container with ID starting with 64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.768683 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.769033 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": container with ID starting with 6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e not found: ID does not exist" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.769068 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"} err="failed to get container status \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": rpc error: code = NotFound desc = could not find container \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": container with ID starting with 6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.769104 4520 scope.go:117] "RemoveContainer" containerID="7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.769398 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": container with ID starting with 7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a not found: ID does not exist" containerID="7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.769439 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"} err="failed to get container status \"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": rpc error: code = NotFound desc = could not find container \"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": container with ID starting with 7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.769466 4520 scope.go:117] "RemoveContainer" containerID="40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.769915 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": container with ID starting with 40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5 not found: ID does not exist" containerID="40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.769943 4520 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"} err="failed to get container status \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": rpc error: code = NotFound desc = could not find container \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": container with ID starting with 40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.769958 4520 scope.go:117] "RemoveContainer" containerID="bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.770301 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": container with ID starting with bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f not found: ID does not exist" containerID="bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.770336 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"} err="failed to get container status \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": rpc error: code = NotFound desc = could not find container \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": container with ID starting with bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.770356 4520 scope.go:117] "RemoveContainer" containerID="df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.770596 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": container with ID starting with df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157 not found: ID does not exist" containerID="df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.770620 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"} err="failed to get container status \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": rpc error: code = NotFound desc = could not find container \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": container with ID starting with df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.770634 4520 scope.go:117] "RemoveContainer" containerID="f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.770888 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": container with ID starting with f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236 not found: ID does not exist" 
containerID="f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.770915 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"} err="failed to get container status \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": rpc error: code = NotFound desc = could not find container \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": container with ID starting with f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.770931 4520 scope.go:117] "RemoveContainer" containerID="7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.771154 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": container with ID starting with 7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7 not found: ID does not exist" containerID="7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.771175 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"} err="failed to get container status \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": rpc error: code = NotFound desc = could not find container \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": container with ID starting with 7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.771189 4520 scope.go:117] "RemoveContainer" containerID="498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.771362 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": container with ID starting with 498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97 not found: ID does not exist" containerID="498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.771385 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"} err="failed to get container status \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": rpc error: code = NotFound desc = could not find container \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": container with ID starting with 498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.771403 4520 scope.go:117] "RemoveContainer" containerID="56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154" Jan 30 06:53:19 crc kubenswrapper[4520]: E0130 06:53:19.771751 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": container with ID starting with 56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154 not found: ID does not exist" containerID="56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.771771 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"} err="failed to get container status \"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": rpc error: code = NotFound desc = could not find container \"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": container with ID starting with 56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.771788 4520 scope.go:117] "RemoveContainer" containerID="64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.772075 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"} err="failed to get container status \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": rpc error: code = NotFound desc = could not find container \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": container with ID starting with 64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.772098 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.772296 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"} err="failed to get container status \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": rpc error: code = NotFound desc = could not find container \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": container with ID starting with 6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.772315 4520 scope.go:117] "RemoveContainer" containerID="7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.772534 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"} err="failed to get container status \"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": rpc error: code = NotFound desc = could not find container \"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": container with ID starting with 7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.772555 4520 scope.go:117] "RemoveContainer" containerID="40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.772774 4520 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"} err="failed to get container status \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": rpc error: code = NotFound desc = could not find container \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": container with ID starting with 40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.772860 4520 scope.go:117] "RemoveContainer" containerID="bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.773129 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"} err="failed to get container status \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": rpc error: code = NotFound desc = could not find container \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": container with ID starting with bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.773152 4520 scope.go:117] "RemoveContainer" containerID="df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.773333 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"} err="failed to get container status \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": rpc error: code = NotFound desc = could not find container \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": container with ID starting with df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.773356 4520 scope.go:117] "RemoveContainer" containerID="f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.773595 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"} err="failed to get container status \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": rpc error: code = NotFound desc = could not find container \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": container with ID starting with f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.773617 4520 scope.go:117] "RemoveContainer" containerID="7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.773792 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"} err="failed to get container status \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": rpc error: code = NotFound desc = could not find container \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": container with ID starting with 7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7 not found: ID does not exist" Jan 
30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.773813 4520 scope.go:117] "RemoveContainer" containerID="498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.774026 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"} err="failed to get container status \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": rpc error: code = NotFound desc = could not find container \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": container with ID starting with 498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.774046 4520 scope.go:117] "RemoveContainer" containerID="56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.774236 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"} err="failed to get container status \"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": rpc error: code = NotFound desc = could not find container \"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": container with ID starting with 56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.774262 4520 scope.go:117] "RemoveContainer" containerID="64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.774485 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"} err="failed to get container status \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": rpc error: code = NotFound desc = could not find container \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": container with ID starting with 64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.774505 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.774767 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"} err="failed to get container status \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": rpc error: code = NotFound desc = could not find container \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": container with ID starting with 6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.774791 4520 scope.go:117] "RemoveContainer" containerID="7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.775031 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"} err="failed to get container status 
\"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": rpc error: code = NotFound desc = could not find container \"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": container with ID starting with 7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.775051 4520 scope.go:117] "RemoveContainer" containerID="40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.775280 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"} err="failed to get container status \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": rpc error: code = NotFound desc = could not find container \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": container with ID starting with 40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.775348 4520 scope.go:117] "RemoveContainer" containerID="bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.775702 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"} err="failed to get container status \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": rpc error: code = NotFound desc = could not find container \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": container with ID starting with bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.775767 4520 scope.go:117] "RemoveContainer" containerID="df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.776239 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"} err="failed to get container status \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": rpc error: code = NotFound desc = could not find container \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": container with ID starting with df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.776262 4520 scope.go:117] "RemoveContainer" containerID="f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.776575 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"} err="failed to get container status \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": rpc error: code = NotFound desc = could not find container \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": container with ID starting with f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.776601 4520 scope.go:117] "RemoveContainer" 
containerID="7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.776842 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"} err="failed to get container status \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": rpc error: code = NotFound desc = could not find container \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": container with ID starting with 7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.776917 4520 scope.go:117] "RemoveContainer" containerID="498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.777194 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"} err="failed to get container status \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": rpc error: code = NotFound desc = could not find container \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": container with ID starting with 498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.777215 4520 scope.go:117] "RemoveContainer" containerID="56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.777441 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"} err="failed to get container status \"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": rpc error: code = NotFound desc = could not find container \"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": container with ID starting with 56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.777462 4520 scope.go:117] "RemoveContainer" containerID="64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.777680 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"} err="failed to get container status \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": rpc error: code = NotFound desc = could not find container \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": container with ID starting with 64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.777701 4520 scope.go:117] "RemoveContainer" containerID="6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.777904 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e"} err="failed to get container status \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": rpc error: code = NotFound desc = could not find 
container \"6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e\": container with ID starting with 6679d9450a5774c0a7e8c5abc3c0b9f9bcbc2fd321a8862e606a18a83a6f902e not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.777960 4520 scope.go:117] "RemoveContainer" containerID="7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.780026 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a"} err="failed to get container status \"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": rpc error: code = NotFound desc = could not find container \"7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a\": container with ID starting with 7fca89c7f6f399aa31866d2c8756dfa0d2a4c3604ca2de637f266e4efa0c603a not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.780047 4520 scope.go:117] "RemoveContainer" containerID="40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.780593 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5"} err="failed to get container status \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": rpc error: code = NotFound desc = could not find container \"40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5\": container with ID starting with 40075cde3aa4a9a9d6e83ba31c4017fe2c0c7a5bc193854b1ecf41fa4eea8cd5 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.780637 4520 scope.go:117] "RemoveContainer" containerID="bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.781159 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f"} err="failed to get container status \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": rpc error: code = NotFound desc = could not find container \"bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f\": container with ID starting with bbab8efd3f95cec20f9c8c09bd6e99542890f56d9e80d724adc872a5c10a0b6f not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.781219 4520 scope.go:117] "RemoveContainer" containerID="df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.781479 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157"} err="failed to get container status \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": rpc error: code = NotFound desc = could not find container \"df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157\": container with ID starting with df9988c8a8cecbc9536505ced65a0d2d37c78dc1fcd5ad8c4638e470c8a3a157 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.781506 4520 scope.go:117] "RemoveContainer" containerID="f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.781769 4520 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236"} err="failed to get container status \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": rpc error: code = NotFound desc = could not find container \"f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236\": container with ID starting with f8e7fb796a0a3212e75fadae735aa9b3cd6a3e28a57dba636eaddf45c41ae236 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.781854 4520 scope.go:117] "RemoveContainer" containerID="7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.782172 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7"} err="failed to get container status \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": rpc error: code = NotFound desc = could not find container \"7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7\": container with ID starting with 7942289c1944b8f9296c81cd27bf3abc07887bf98e98014471b1c5ad91910dd7 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.782248 4520 scope.go:117] "RemoveContainer" containerID="498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.782567 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97"} err="failed to get container status \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": rpc error: code = NotFound desc = could not find container \"498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97\": container with ID starting with 498b41f35c1240313cebcaa535d4309cd24b578216fc574a817a3769b35ceb97 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.782589 4520 scope.go:117] "RemoveContainer" containerID="56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.782914 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154"} err="failed to get container status \"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": rpc error: code = NotFound desc = could not find container \"56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154\": container with ID starting with 56b2a818169fcfe069ebed46e6b3809e2147b763a3c5c2bc5801cca240b59154 not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.782936 4520 scope.go:117] "RemoveContainer" containerID="64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.783165 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae"} err="failed to get container status \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": rpc error: code = NotFound desc = could not find container \"64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae\": container with ID starting with 
64d3e2184b58bf7bcb6224a1a435de5863b26e0398998735c1963be36e6651ae not found: ID does not exist" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.856392 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.964975 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tm5s"] Jan 30 06:53:19 crc kubenswrapper[4520]: I0130 06:53:19.970460 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tm5s"] Jan 30 06:53:20 crc kubenswrapper[4520]: I0130 06:53:20.631013 4520 generic.go:334] "Generic (PLEG): container finished" podID="171f1787-d3df-4513-8002-6aee04444d1a" containerID="f77d8e01293da69892a47d951b831903b920f81f8414c1de491d928b256c48e0" exitCode=0 Jan 30 06:53:20 crc kubenswrapper[4520]: I0130 06:53:20.631085 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerDied","Data":"f77d8e01293da69892a47d951b831903b920f81f8414c1de491d928b256c48e0"} Jan 30 06:53:20 crc kubenswrapper[4520]: I0130 06:53:20.631325 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"219b936411c2dadc4c46b75d5259aa773fde3350c1297e428a75ae9f25f4c063"} Jan 30 06:53:20 crc kubenswrapper[4520]: I0130 06:53:20.694392 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705f09bd-e1b6-47fd-83db-189fbe9a7b95" path="/var/lib/kubelet/pods/705f09bd-e1b6-47fd-83db-189fbe9a7b95/volumes" Jan 30 06:53:21 crc kubenswrapper[4520]: I0130 06:53:21.641061 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"f5609a1f75ee5d898c47d3420335f6475f99129730e143d033337f744cce5edd"} Jan 30 06:53:21 crc kubenswrapper[4520]: I0130 06:53:21.641428 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"aaa1ff9db4f81dfff5ab8a70e55608c49c1621de4632c45a9e0196caa56b5bf4"} Jan 30 06:53:21 crc kubenswrapper[4520]: I0130 06:53:21.641445 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"bcc7613385ad9db2309fa8888f5d69a1674e98d018e700b54629ce1eaff33ca1"} Jan 30 06:53:21 crc kubenswrapper[4520]: I0130 06:53:21.641456 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"878a02380c12618dda6f9c257b99bf48d77d451ee6aee1a5f9d7522053fb1e99"} Jan 30 06:53:21 crc kubenswrapper[4520]: I0130 06:53:21.641467 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"13f7c85324b9fa1aaf5c43eaae68dd2fb117713f1657b7ff9d968d010991e27d"} Jan 30 06:53:21 crc kubenswrapper[4520]: I0130 06:53:21.641476 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" 
event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"b852e3b6258f11d23d08755db8a0008c81fea440b58c88a5adc2b6bd85835d8c"} Jan 30 06:53:23 crc kubenswrapper[4520]: I0130 06:53:23.662002 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"64dabc2888c8d335a7e685b7c1e0c2b9efb3eb815f244e192c5a0f4032de27f8"} Jan 30 06:53:25 crc kubenswrapper[4520]: I0130 06:53:25.679882 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" event={"ID":"171f1787-d3df-4513-8002-6aee04444d1a","Type":"ContainerStarted","Data":"d5b1492bead7ce78bbd7d667d28ece9a433998966453620a5e577b3f835b89bf"} Jan 30 06:53:25 crc kubenswrapper[4520]: I0130 06:53:25.680333 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:25 crc kubenswrapper[4520]: I0130 06:53:25.680345 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:25 crc kubenswrapper[4520]: I0130 06:53:25.713026 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:25 crc kubenswrapper[4520]: I0130 06:53:25.716413 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" podStartSLOduration=6.716394412 podStartE2EDuration="6.716394412s" podCreationTimestamp="2026-01-30 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:53:25.711480627 +0000 UTC m=+519.339832808" watchObservedRunningTime="2026-01-30 06:53:25.716394412 +0000 UTC m=+519.344746593" Jan 30 06:53:26 crc kubenswrapper[4520]: I0130 06:53:26.690158 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:26 crc kubenswrapper[4520]: I0130 06:53:26.716485 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.113782 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl"] Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.114790 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.117481 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.121125 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl"] Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.189899 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.189987 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.190017 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnchw\" (UniqueName: \"kubernetes.io/projected/f2388674-cc7f-4927-9bf8-2157f12a66f4-kube-api-access-cnchw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.291311 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.291374 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.291407 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnchw\" (UniqueName: \"kubernetes.io/projected/f2388674-cc7f-4927-9bf8-2157f12a66f4-kube-api-access-cnchw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.291828 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.291929 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.311383 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnchw\" (UniqueName: \"kubernetes.io/projected/f2388674-cc7f-4927-9bf8-2157f12a66f4-kube-api-access-cnchw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.428732 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: E0130 06:53:31.453135 4520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(a44e7c47b8dc00e3f999acb81acff171120ee71ee60062a10e4824af9ba06832): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 06:53:31 crc kubenswrapper[4520]: E0130 06:53:31.453217 4520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(a44e7c47b8dc00e3f999acb81acff171120ee71ee60062a10e4824af9ba06832): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: E0130 06:53:31.453244 4520 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(a44e7c47b8dc00e3f999acb81acff171120ee71ee60062a10e4824af9ba06832): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: E0130 06:53:31.453296 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace(f2388674-cc7f-4927-9bf8-2157f12a66f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace(f2388674-cc7f-4927-9bf8-2157f12a66f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(a44e7c47b8dc00e3f999acb81acff171120ee71ee60062a10e4824af9ba06832): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.722574 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: I0130 06:53:31.723130 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: E0130 06:53:31.747050 4520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(9735e0819e7464ea09367987ce22627538f7b0c10beffbce5ffad64a1a9f1f4f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 06:53:31 crc kubenswrapper[4520]: E0130 06:53:31.747137 4520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(9735e0819e7464ea09367987ce22627538f7b0c10beffbce5ffad64a1a9f1f4f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: E0130 06:53:31.747167 4520 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(9735e0819e7464ea09367987ce22627538f7b0c10beffbce5ffad64a1a9f1f4f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:31 crc kubenswrapper[4520]: E0130 06:53:31.747244 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace(f2388674-cc7f-4927-9bf8-2157f12a66f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace(f2388674-cc7f-4927-9bf8-2157f12a66f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(9735e0819e7464ea09367987ce22627538f7b0c10beffbce5ffad64a1a9f1f4f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" Jan 30 06:53:32 crc kubenswrapper[4520]: I0130 06:53:32.685578 4520 scope.go:117] "RemoveContainer" containerID="62c6675ec316ce30555a257a931998d24e9ffbaca75aed0464d002d9f6c3c7cf" Jan 30 06:53:32 crc kubenswrapper[4520]: E0130 06:53:32.686744 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mn7g2_openshift-multus(dfdf507d-4d3e-40ac-a9dc-c39c411f4c26)\"" pod="openshift-multus/multus-mn7g2" podUID="dfdf507d-4d3e-40ac-a9dc-c39c411f4c26" Jan 30 06:53:46 crc kubenswrapper[4520]: I0130 06:53:46.685364 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:46 crc kubenswrapper[4520]: I0130 06:53:46.688066 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:46 crc kubenswrapper[4520]: E0130 06:53:46.710873 4520 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(7c98a5982e250de6f561ae6fcedc18f4a0e529e0197ccca7574fe7ddb9af2a78): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 06:53:46 crc kubenswrapper[4520]: E0130 06:53:46.710945 4520 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(7c98a5982e250de6f561ae6fcedc18f4a0e529e0197ccca7574fe7ddb9af2a78): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:46 crc kubenswrapper[4520]: E0130 06:53:46.710967 4520 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(7c98a5982e250de6f561ae6fcedc18f4a0e529e0197ccca7574fe7ddb9af2a78): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:46 crc kubenswrapper[4520]: E0130 06:53:46.711014 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace(f2388674-cc7f-4927-9bf8-2157f12a66f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace(f2388674-cc7f-4927-9bf8-2157f12a66f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl_openshift-marketplace_f2388674-cc7f-4927-9bf8-2157f12a66f4_0(7c98a5982e250de6f561ae6fcedc18f4a0e529e0197ccca7574fe7ddb9af2a78): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" Jan 30 06:53:46 crc kubenswrapper[4520]: I0130 06:53:46.942904 4520 scope.go:117] "RemoveContainer" containerID="d835f1d19bf2442d881e665a0be837f0cd4e387cc45269e26a528de8b113de21" Jan 30 06:53:47 crc kubenswrapper[4520]: I0130 06:53:47.686333 4520 scope.go:117] "RemoveContainer" containerID="62c6675ec316ce30555a257a931998d24e9ffbaca75aed0464d002d9f6c3c7cf" Jan 30 06:53:47 crc kubenswrapper[4520]: I0130 06:53:47.808747 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mn7g2_dfdf507d-4d3e-40ac-a9dc-c39c411f4c26/kube-multus/2.log" Jan 30 06:53:47 crc kubenswrapper[4520]: I0130 06:53:47.808845 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mn7g2" event={"ID":"dfdf507d-4d3e-40ac-a9dc-c39c411f4c26","Type":"ContainerStarted","Data":"2abd7f83868c135d6dfd7a2bc0b1f62791497dc06773e86bbf5878397ac7639f"} Jan 30 06:53:49 crc kubenswrapper[4520]: I0130 06:53:49.872566 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j7rbl" Jan 30 06:53:57 crc kubenswrapper[4520]: I0130 06:53:57.685684 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:57 crc kubenswrapper[4520]: I0130 06:53:57.686257 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:53:58 crc kubenswrapper[4520]: I0130 06:53:58.030187 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl"] Jan 30 06:53:58 crc kubenswrapper[4520]: I0130 06:53:58.855823 4520 generic.go:334] "Generic (PLEG): container finished" podID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerID="920da7085e9762d5554c40db6c8ccd87273fbff0bd7874c7c5c273c278ce555b" exitCode=0 Jan 30 06:53:58 crc kubenswrapper[4520]: I0130 06:53:58.855873 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" event={"ID":"f2388674-cc7f-4927-9bf8-2157f12a66f4","Type":"ContainerDied","Data":"920da7085e9762d5554c40db6c8ccd87273fbff0bd7874c7c5c273c278ce555b"} Jan 30 06:53:58 crc kubenswrapper[4520]: I0130 06:53:58.855898 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" event={"ID":"f2388674-cc7f-4927-9bf8-2157f12a66f4","Type":"ContainerStarted","Data":"18c9d21693523f1c6e4ae7186d949aa209c39863364dfe1ea9a13fdf052a4626"} Jan 30 06:54:00 crc kubenswrapper[4520]: I0130 06:54:00.866952 4520 generic.go:334] "Generic (PLEG): container finished" podID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerID="d570db3610945bd098b6582183ce5ec6dc6963c54f12c074199269d6d2641176" exitCode=0 Jan 30 06:54:00 crc kubenswrapper[4520]: I0130 06:54:00.866994 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" event={"ID":"f2388674-cc7f-4927-9bf8-2157f12a66f4","Type":"ContainerDied","Data":"d570db3610945bd098b6582183ce5ec6dc6963c54f12c074199269d6d2641176"} Jan 30 06:54:01 crc kubenswrapper[4520]: I0130 06:54:01.874123 4520 generic.go:334] "Generic (PLEG): container finished" podID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerID="f6c0659c921b73580cce0fa44988144fac693c53e00d7da875a2f149a103bd07" exitCode=0 Jan 30 06:54:01 crc kubenswrapper[4520]: I0130 06:54:01.874169 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" event={"ID":"f2388674-cc7f-4927-9bf8-2157f12a66f4","Type":"ContainerDied","Data":"f6c0659c921b73580cce0fa44988144fac693c53e00d7da875a2f149a103bd07"} Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.055796 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.201073 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-bundle\") pod \"f2388674-cc7f-4927-9bf8-2157f12a66f4\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.201144 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnchw\" (UniqueName: \"kubernetes.io/projected/f2388674-cc7f-4927-9bf8-2157f12a66f4-kube-api-access-cnchw\") pod \"f2388674-cc7f-4927-9bf8-2157f12a66f4\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.201165 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-util\") pod \"f2388674-cc7f-4927-9bf8-2157f12a66f4\" (UID: \"f2388674-cc7f-4927-9bf8-2157f12a66f4\") " Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.201885 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-bundle" (OuterVolumeSpecName: "bundle") pod "f2388674-cc7f-4927-9bf8-2157f12a66f4" (UID: "f2388674-cc7f-4927-9bf8-2157f12a66f4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.206649 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2388674-cc7f-4927-9bf8-2157f12a66f4-kube-api-access-cnchw" (OuterVolumeSpecName: "kube-api-access-cnchw") pod "f2388674-cc7f-4927-9bf8-2157f12a66f4" (UID: "f2388674-cc7f-4927-9bf8-2157f12a66f4"). InnerVolumeSpecName "kube-api-access-cnchw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.211055 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-util" (OuterVolumeSpecName: "util") pod "f2388674-cc7f-4927-9bf8-2157f12a66f4" (UID: "f2388674-cc7f-4927-9bf8-2157f12a66f4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.302825 4520 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.302865 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnchw\" (UniqueName: \"kubernetes.io/projected/f2388674-cc7f-4927-9bf8-2157f12a66f4-kube-api-access-cnchw\") on node \"crc\" DevicePath \"\"" Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.302880 4520 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2388674-cc7f-4927-9bf8-2157f12a66f4-util\") on node \"crc\" DevicePath \"\"" Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.886167 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" event={"ID":"f2388674-cc7f-4927-9bf8-2157f12a66f4","Type":"ContainerDied","Data":"18c9d21693523f1c6e4ae7186d949aa209c39863364dfe1ea9a13fdf052a4626"} Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.886251 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c9d21693523f1c6e4ae7186d949aa209c39863364dfe1ea9a13fdf052a4626" Jan 30 06:54:03 crc kubenswrapper[4520]: I0130 06:54:03.886212 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f4fdl" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.587245 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-c9ttx"] Jan 30 06:54:07 crc kubenswrapper[4520]: E0130 06:54:07.587888 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerName="pull" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.587903 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerName="pull" Jan 30 06:54:07 crc kubenswrapper[4520]: E0130 06:54:07.587911 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerName="extract" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.587917 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerName="extract" Jan 30 06:54:07 crc kubenswrapper[4520]: E0130 06:54:07.587928 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerName="util" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.587934 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerName="util" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.588044 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2388674-cc7f-4927-9bf8-2157f12a66f4" containerName="extract" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.588403 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-c9ttx" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.589698 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tn8mv" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.590630 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.591001 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.603507 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-c9ttx"] Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.752429 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmlcq\" (UniqueName: \"kubernetes.io/projected/d050cc95-e46d-472e-8496-4a918046dd76-kube-api-access-hmlcq\") pod \"nmstate-operator-646758c888-c9ttx\" (UID: \"d050cc95-e46d-472e-8496-4a918046dd76\") " pod="openshift-nmstate/nmstate-operator-646758c888-c9ttx" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.854565 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmlcq\" (UniqueName: \"kubernetes.io/projected/d050cc95-e46d-472e-8496-4a918046dd76-kube-api-access-hmlcq\") pod \"nmstate-operator-646758c888-c9ttx\" (UID: \"d050cc95-e46d-472e-8496-4a918046dd76\") " pod="openshift-nmstate/nmstate-operator-646758c888-c9ttx" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.871161 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmlcq\" (UniqueName: \"kubernetes.io/projected/d050cc95-e46d-472e-8496-4a918046dd76-kube-api-access-hmlcq\") pod \"nmstate-operator-646758c888-c9ttx\" (UID: \"d050cc95-e46d-472e-8496-4a918046dd76\") " pod="openshift-nmstate/nmstate-operator-646758c888-c9ttx" Jan 30 06:54:07 crc kubenswrapper[4520]: I0130 06:54:07.900317 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-c9ttx" Jan 30 06:54:08 crc kubenswrapper[4520]: I0130 06:54:08.060385 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-c9ttx"] Jan 30 06:54:08 crc kubenswrapper[4520]: W0130 06:54:08.065749 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd050cc95_e46d_472e_8496_4a918046dd76.slice/crio-3d5c984fbb95383d8239f2a4fcd946256457436f72a7f099cd2e1730641079d3 WatchSource:0}: Error finding container 3d5c984fbb95383d8239f2a4fcd946256457436f72a7f099cd2e1730641079d3: Status 404 returned error can't find the container with id 3d5c984fbb95383d8239f2a4fcd946256457436f72a7f099cd2e1730641079d3 Jan 30 06:54:08 crc kubenswrapper[4520]: I0130 06:54:08.911293 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-c9ttx" event={"ID":"d050cc95-e46d-472e-8496-4a918046dd76","Type":"ContainerStarted","Data":"3d5c984fbb95383d8239f2a4fcd946256457436f72a7f099cd2e1730641079d3"} Jan 30 06:54:10 crc kubenswrapper[4520]: I0130 06:54:10.921662 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-c9ttx" event={"ID":"d050cc95-e46d-472e-8496-4a918046dd76","Type":"ContainerStarted","Data":"8510dcc48e0e2d91098422d82476d30100311d929953f049cfcbe42b705b9d5a"} Jan 30 06:54:10 crc kubenswrapper[4520]: I0130 06:54:10.937453 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-c9ttx" podStartSLOduration=2.04511081 podStartE2EDuration="3.93743493s" podCreationTimestamp="2026-01-30 06:54:07 +0000 UTC" firstStartedPulling="2026-01-30 06:54:08.0680702 +0000 UTC m=+561.696422381" lastFinishedPulling="2026-01-30 06:54:09.960394321 +0000 UTC m=+563.588746501" observedRunningTime="2026-01-30 06:54:10.936057551 +0000 UTC m=+564.564409733" watchObservedRunningTime="2026-01-30 06:54:10.93743493 +0000 UTC m=+564.565787112" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.789035 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hfjq6"] Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.789958 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.793621 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2dwlv" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.804638 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hfjq6"] Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.863027 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"] Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.864914 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.869781 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.872166 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"] Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.879621 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2p6bm"] Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.880813 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2p6bm" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.914201 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89xzp\" (UniqueName: \"kubernetes.io/projected/d078ea8f-160b-4f2f-8ac8-ce75a770b2f8-kube-api-access-89xzp\") pod \"nmstate-metrics-54757c584b-hfjq6\" (UID: \"d078ea8f-160b-4f2f-8ac8-ce75a770b2f8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.988483 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"] Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.989876 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.995044 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.995079 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.995249 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"] Jan 30 06:54:11 crc kubenswrapper[4520]: I0130 06:54:11.995278 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jf8ck" Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.015467 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-dbus-socket\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm" Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.015551 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89xzp\" (UniqueName: \"kubernetes.io/projected/d078ea8f-160b-4f2f-8ac8-ce75a770b2f8-kube-api-access-89xzp\") pod \"nmstate-metrics-54757c584b-hfjq6\" (UID: \"d078ea8f-160b-4f2f-8ac8-ce75a770b2f8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6" Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.015611 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbv4g\" (UniqueName: \"kubernetes.io/projected/3618b659-b207-4208-9426-ccba22533890-kube-api-access-bbv4g\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm" Jan 30 06:54:12 crc 
kubenswrapper[4520]: I0130 06:54:12.015673 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtmbj\" (UniqueName: \"kubernetes.io/projected/f16e0121-e604-4297-8068-53389b66f567-kube-api-access-qtmbj\") pod \"nmstate-webhook-8474b5b9d8-6h87w\" (UID: \"f16e0121-e604-4297-8068-53389b66f567\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.015721 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-nmstate-lock\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.015757 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-ovs-socket\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.015815 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f16e0121-e604-4297-8068-53389b66f567-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6h87w\" (UID: \"f16e0121-e604-4297-8068-53389b66f567\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.034500 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89xzp\" (UniqueName: \"kubernetes.io/projected/d078ea8f-160b-4f2f-8ac8-ce75a770b2f8-kube-api-access-89xzp\") pod \"nmstate-metrics-54757c584b-hfjq6\" (UID: \"d078ea8f-160b-4f2f-8ac8-ce75a770b2f8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.112209 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117554 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-nmstate-lock\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117605 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-ovs-socket\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117647 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9zxf\" (UniqueName: \"kubernetes.io/projected/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-kube-api-access-c9zxf\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117693 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117733 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f16e0121-e604-4297-8068-53389b66f567-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6h87w\" (UID: \"f16e0121-e604-4297-8068-53389b66f567\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117765 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-dbus-socket\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117759 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-nmstate-lock\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117836 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbv4g\" (UniqueName: \"kubernetes.io/projected/3618b659-b207-4208-9426-ccba22533890-kube-api-access-bbv4g\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.117805 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-ovs-socket\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.118041 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3618b659-b207-4208-9426-ccba22533890-dbus-socket\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.118607 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtmbj\" (UniqueName: \"kubernetes.io/projected/f16e0121-e604-4297-8068-53389b66f567-kube-api-access-qtmbj\") pod \"nmstate-webhook-8474b5b9d8-6h87w\" (UID: \"f16e0121-e604-4297-8068-53389b66f567\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.118680 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.138204 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbv4g\" (UniqueName: \"kubernetes.io/projected/3618b659-b207-4208-9426-ccba22533890-kube-api-access-bbv4g\") pod \"nmstate-handler-2p6bm\" (UID: \"3618b659-b207-4208-9426-ccba22533890\") " pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.138997 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtmbj\" (UniqueName: \"kubernetes.io/projected/f16e0121-e604-4297-8068-53389b66f567-kube-api-access-qtmbj\") pod \"nmstate-webhook-8474b5b9d8-6h87w\" (UID: \"f16e0121-e604-4297-8068-53389b66f567\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.139403 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f16e0121-e604-4297-8068-53389b66f567-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6h87w\" (UID: \"f16e0121-e604-4297-8068-53389b66f567\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.172003 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5698ddd759-pv9lh"]
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.173107 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.190432 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5698ddd759-pv9lh"]
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.199307 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.213530 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.220090 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9zxf\" (UniqueName: \"kubernetes.io/projected/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-kube-api-access-c9zxf\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.220130 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.220223 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: E0130 06:54:12.220366 4520 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Jan 30 06:54:12 crc kubenswrapper[4520]: E0130 06:54:12.220451 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-plugin-serving-cert podName:5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0 nodeName:}" failed. No retries permitted until 2026-01-30 06:54:12.720431172 +0000 UTC m=+566.348783353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-bwppr" (UID: "5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0") : secret "plugin-serving-cert" not found
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.221034 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.239159 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9zxf\" (UniqueName: \"kubernetes.io/projected/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-kube-api-access-c9zxf\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.322133 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-oauth-config\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.322490 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-oauth-serving-cert\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.322570 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-serving-cert\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.322672 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-service-ca\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.322726 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-config\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.322838 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-trusted-ca-bundle\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.322876 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22gzl\" (UniqueName: \"kubernetes.io/projected/1117f9de-e43c-4012-8d6d-1d975e62a4cb-kube-api-access-22gzl\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.400740 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hfjq6"]
Jan 30 06:54:12 crc kubenswrapper[4520]: W0130 06:54:12.406222 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd078ea8f_160b_4f2f_8ac8_ce75a770b2f8.slice/crio-bdc12e5d825069dee10c493704a87d41cf9a4c9f057b3585d2c01896f7dc7ebe WatchSource:0}: Error finding container bdc12e5d825069dee10c493704a87d41cf9a4c9f057b3585d2c01896f7dc7ebe: Status 404 returned error can't find the container with id bdc12e5d825069dee10c493704a87d41cf9a4c9f057b3585d2c01896f7dc7ebe
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.424875 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-serving-cert\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.424973 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-service-ca\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.425020 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-config\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.425077 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-trusted-ca-bundle\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.425102 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22gzl\" (UniqueName: \"kubernetes.io/projected/1117f9de-e43c-4012-8d6d-1d975e62a4cb-kube-api-access-22gzl\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.425128 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-oauth-config\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.425171 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-oauth-serving-cert\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.426481 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-service-ca\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.426568 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-config\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.427403 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-trusted-ca-bundle\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.427440 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1117f9de-e43c-4012-8d6d-1d975e62a4cb-oauth-serving-cert\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.431459 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-serving-cert\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.436418 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1117f9de-e43c-4012-8d6d-1d975e62a4cb-console-oauth-config\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.444843 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22gzl\" (UniqueName: \"kubernetes.io/projected/1117f9de-e43c-4012-8d6d-1d975e62a4cb-kube-api-access-22gzl\") pod \"console-5698ddd759-pv9lh\" (UID: \"1117f9de-e43c-4012-8d6d-1d975e62a4cb\") " pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.453692 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"]
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.489172 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:12 crc kubenswrapper[4520]: W0130 06:54:12.672069 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1117f9de_e43c_4012_8d6d_1d975e62a4cb.slice/crio-a1b702a35b62ce4b717a4818bdf2c4856de972b81b89d77208d38a5ad7e609c3 WatchSource:0}: Error finding container a1b702a35b62ce4b717a4818bdf2c4856de972b81b89d77208d38a5ad7e609c3: Status 404 returned error can't find the container with id a1b702a35b62ce4b717a4818bdf2c4856de972b81b89d77208d38a5ad7e609c3
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.672149 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5698ddd759-pv9lh"]
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.728855 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.733303 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bwppr\" (UID: \"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.910490 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.939397 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2p6bm" event={"ID":"3618b659-b207-4208-9426-ccba22533890","Type":"ContainerStarted","Data":"d015b8226bd5a9a6607bf184b449fdea3527f2f400fab1cf8924da516a4054d4"}
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.941311 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6" event={"ID":"d078ea8f-160b-4f2f-8ac8-ce75a770b2f8","Type":"ContainerStarted","Data":"bdc12e5d825069dee10c493704a87d41cf9a4c9f057b3585d2c01896f7dc7ebe"}
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.944393 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5698ddd759-pv9lh" event={"ID":"1117f9de-e43c-4012-8d6d-1d975e62a4cb","Type":"ContainerStarted","Data":"d1b024725b555d0a42d70c85fb9a7049d0e412b55f12985c1d5c0947c9d98344"}
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.944444 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5698ddd759-pv9lh" event={"ID":"1117f9de-e43c-4012-8d6d-1d975e62a4cb","Type":"ContainerStarted","Data":"a1b702a35b62ce4b717a4818bdf2c4856de972b81b89d77208d38a5ad7e609c3"}
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.946702 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w" event={"ID":"f16e0121-e604-4297-8068-53389b66f567","Type":"ContainerStarted","Data":"fc17a42f9bff84ad7c675ed830842e6cf257c3dc1311712732ea408400ab91cb"}
Jan 30 06:54:12 crc kubenswrapper[4520]: I0130 06:54:12.962230 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5698ddd759-pv9lh" podStartSLOduration=0.96221823 podStartE2EDuration="962.21823ms" podCreationTimestamp="2026-01-30 06:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:54:12.961202492 +0000 UTC m=+566.589554672" watchObservedRunningTime="2026-01-30 06:54:12.96221823 +0000 UTC m=+566.590570411"
Jan 30 06:54:13 crc kubenswrapper[4520]: I0130 06:54:13.095604 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr"]
Jan 30 06:54:13 crc kubenswrapper[4520]: I0130 06:54:13.958619 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr" event={"ID":"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0","Type":"ContainerStarted","Data":"6e91b9bac5b8ca6535465c513a4e593f6ae4f6938cd53150f57f1d77f0490406"}
Jan 30 06:54:15 crc kubenswrapper[4520]: I0130 06:54:15.972745 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6" event={"ID":"d078ea8f-160b-4f2f-8ac8-ce75a770b2f8","Type":"ContainerStarted","Data":"8139fa2394de25430e9da6bd4d3cd64966cdf4e65a4d2bcb85a88f3efabb06d1"}
Jan 30 06:54:15 crc kubenswrapper[4520]: I0130 06:54:15.975080 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w" event={"ID":"f16e0121-e604-4297-8068-53389b66f567","Type":"ContainerStarted","Data":"58bef555e281f596ac98c522d354f9f3bae248f8db5998adae5f1df17d542be6"}
Jan 30 06:54:15 crc kubenswrapper[4520]: I0130 06:54:15.975427 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:15 crc kubenswrapper[4520]: I0130 06:54:15.980463 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2p6bm" event={"ID":"3618b659-b207-4208-9426-ccba22533890","Type":"ContainerStarted","Data":"4640e427f3ae106d359b01fef7fcf400f7f0d91006e9b6749d20cd693a7f6117"}
Jan 30 06:54:15 crc kubenswrapper[4520]: I0130 06:54:15.981202 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:16 crc kubenswrapper[4520]: I0130 06:54:16.000473 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w" podStartSLOduration=2.5147033629999997 podStartE2EDuration="5.000457205s" podCreationTimestamp="2026-01-30 06:54:11 +0000 UTC" firstStartedPulling="2026-01-30 06:54:12.465651564 +0000 UTC m=+566.094003745" lastFinishedPulling="2026-01-30 06:54:14.951405405 +0000 UTC m=+568.579757587" observedRunningTime="2026-01-30 06:54:16.00040109 +0000 UTC m=+569.628753270" watchObservedRunningTime="2026-01-30 06:54:16.000457205 +0000 UTC m=+569.628809386"
Jan 30 06:54:16 crc kubenswrapper[4520]: I0130 06:54:16.019919 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2p6bm" podStartSLOduration=2.3116308119999998 podStartE2EDuration="5.019906757s" podCreationTimestamp="2026-01-30 06:54:11 +0000 UTC" firstStartedPulling="2026-01-30 06:54:12.247717662 +0000 UTC m=+565.876069843" lastFinishedPulling="2026-01-30 06:54:14.955993607 +0000 UTC m=+568.584345788" observedRunningTime="2026-01-30 06:54:16.016958366 +0000 UTC m=+569.645310547" watchObservedRunningTime="2026-01-30 06:54:16.019906757 +0000 UTC m=+569.648258938"
Jan 30 06:54:16 crc kubenswrapper[4520]: I0130 06:54:16.989577 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr" event={"ID":"5a68fd5c-7af7-4e9a-b3f4-af6e9ff989c0","Type":"ContainerStarted","Data":"7e2bee8626ef2111a728ea5aa3ee494a5a9c05e4b8cc1094994c12bf9e1b45a4"}
Jan 30 06:54:17 crc kubenswrapper[4520]: I0130 06:54:17.013418 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bwppr" podStartSLOduration=3.165920905 podStartE2EDuration="6.01339156s" podCreationTimestamp="2026-01-30 06:54:11 +0000 UTC" firstStartedPulling="2026-01-30 06:54:13.107742775 +0000 UTC m=+566.736094945" lastFinishedPulling="2026-01-30 06:54:15.955213419 +0000 UTC m=+569.583565600" observedRunningTime="2026-01-30 06:54:17.00993258 +0000 UTC m=+570.638284761" watchObservedRunningTime="2026-01-30 06:54:17.01339156 +0000 UTC m=+570.641743741"
Jan 30 06:54:18 crc kubenswrapper[4520]: I0130 06:54:18.000501 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6" event={"ID":"d078ea8f-160b-4f2f-8ac8-ce75a770b2f8","Type":"ContainerStarted","Data":"096b9eb4d294eb039266ee1e0f2872252ddbf05fee690105094ba186514d94f6"}
Jan 30 06:54:18 crc kubenswrapper[4520]: I0130 06:54:18.018122 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-hfjq6" podStartSLOduration=2.237109033 podStartE2EDuration="7.018104935s" podCreationTimestamp="2026-01-30 06:54:11 +0000 UTC" firstStartedPulling="2026-01-30 06:54:12.409002584 +0000 UTC m=+566.037354765" lastFinishedPulling="2026-01-30 06:54:17.189998486 +0000 UTC m=+570.818350667" observedRunningTime="2026-01-30 06:54:18.016064812 +0000 UTC m=+571.644416993" watchObservedRunningTime="2026-01-30 06:54:18.018104935 +0000 UTC m=+571.646457106"
Jan 30 06:54:22 crc kubenswrapper[4520]: I0130 06:54:22.240809 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2p6bm"
Jan 30 06:54:22 crc kubenswrapper[4520]: I0130 06:54:22.490183 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:22 crc kubenswrapper[4520]: I0130 06:54:22.490389 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:22 crc kubenswrapper[4520]: I0130 06:54:22.495504 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:23 crc kubenswrapper[4520]: I0130 06:54:23.033720 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5698ddd759-pv9lh"
Jan 30 06:54:23 crc kubenswrapper[4520]: I0130 06:54:23.089134 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nkbdc"]
Jan 30 06:54:27 crc kubenswrapper[4520]: I0130 06:54:27.793740 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 06:54:27 crc kubenswrapper[4520]: I0130 06:54:27.794488 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 06:54:32 crc kubenswrapper[4520]: I0130 06:54:32.205254 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.135645 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"]
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.137150 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.139194 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.145346 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"]
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.321913 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xwr7\" (UniqueName: \"kubernetes.io/projected/41ef61d6-a574-45c4-a96c-4068d31bf1ba-kube-api-access-2xwr7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.322005 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.322084 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.423344 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xwr7\" (UniqueName: \"kubernetes.io/projected/41ef61d6-a574-45c4-a96c-4068d31bf1ba-kube-api-access-2xwr7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.423422 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.423454 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.423964 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.424039 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.441990 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xwr7\" (UniqueName: \"kubernetes.io/projected/41ef61d6-a574-45c4-a96c-4068d31bf1ba-kube-api-access-2xwr7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.452146 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:43 crc kubenswrapper[4520]: I0130 06:54:43.814290 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"]
Jan 30 06:54:44 crc kubenswrapper[4520]: I0130 06:54:44.167562 4520 generic.go:334] "Generic (PLEG): container finished" podID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerID="41eb40030b509f82e81242a4f863b02f94785de092b7bc631c06b353b6e41e17" exitCode=0
Jan 30 06:54:44 crc kubenswrapper[4520]: I0130 06:54:44.167680 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw" event={"ID":"41ef61d6-a574-45c4-a96c-4068d31bf1ba","Type":"ContainerDied","Data":"41eb40030b509f82e81242a4f863b02f94785de092b7bc631c06b353b6e41e17"}
Jan 30 06:54:44 crc kubenswrapper[4520]: I0130 06:54:44.167994 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw" event={"ID":"41ef61d6-a574-45c4-a96c-4068d31bf1ba","Type":"ContainerStarted","Data":"cb559c2dcf18295d761648494a06e6116191487f5268af5562fa0daa0db0b86e"}
Jan 30 06:54:46 crc kubenswrapper[4520]: I0130 06:54:46.181003 4520 generic.go:334] "Generic (PLEG): container finished" podID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerID="853cc0d21c5d39095ffb35be31d5b5ffa15c8e7bd9bea67cee5c86e1567a200c" exitCode=0
Jan 30 06:54:46 crc kubenswrapper[4520]: I0130 06:54:46.181077 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw" event={"ID":"41ef61d6-a574-45c4-a96c-4068d31bf1ba","Type":"ContainerDied","Data":"853cc0d21c5d39095ffb35be31d5b5ffa15c8e7bd9bea67cee5c86e1567a200c"}
Jan 30 06:54:47 crc kubenswrapper[4520]: I0130 06:54:47.189934 4520 generic.go:334] "Generic (PLEG): container finished" podID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerID="e22d4d616f943c50e41cfe8afe48cdcef526a08beab1cec439028bcd3fe1f552" exitCode=0
Jan 30 06:54:47 crc kubenswrapper[4520]: I0130 06:54:47.190019 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw" event={"ID":"41ef61d6-a574-45c4-a96c-4068d31bf1ba","Type":"ContainerDied","Data":"e22d4d616f943c50e41cfe8afe48cdcef526a08beab1cec439028bcd3fe1f552"}
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.116363 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-nkbdc" podUID="d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" containerName="console" containerID="cri-o://e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873" gracePeriod=15
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.404916 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.463293 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nkbdc_d3fdb20f-d725-45b1-9825-8c2b6f6fd24b/console/0.log"
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.463376 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.581259 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-serving-cert\") pod \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.581925 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-util\") pod \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.581969 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-config\") pod \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.581996 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xwr7\" (UniqueName: \"kubernetes.io/projected/41ef61d6-a574-45c4-a96c-4068d31bf1ba-kube-api-access-2xwr7\") pod \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.582033 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-bundle\") pod \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\" (UID: \"41ef61d6-a574-45c4-a96c-4068d31bf1ba\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.582065 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-oauth-config\") pod \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.582092 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-oauth-serving-cert\") pod \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.582113 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vddvg\" (UniqueName: \"kubernetes.io/projected/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-kube-api-access-vddvg\") pod \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.582141 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-service-ca\") pod \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.582165 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-trusted-ca-bundle\") pod \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\" (UID: \"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b\") "
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.582864 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" (UID: "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.583329 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" (UID: "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.583352 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-service-ca" (OuterVolumeSpecName: "service-ca") pod "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" (UID: "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.583483 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-config" (OuterVolumeSpecName: "console-config") pod "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" (UID: "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.584190 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-bundle" (OuterVolumeSpecName: "bundle") pod "41ef61d6-a574-45c4-a96c-4068d31bf1ba" (UID: "41ef61d6-a574-45c4-a96c-4068d31bf1ba"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.586626 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" (UID: "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.586687 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ef61d6-a574-45c4-a96c-4068d31bf1ba-kube-api-access-2xwr7" (OuterVolumeSpecName: "kube-api-access-2xwr7") pod "41ef61d6-a574-45c4-a96c-4068d31bf1ba" (UID: "41ef61d6-a574-45c4-a96c-4068d31bf1ba"). InnerVolumeSpecName "kube-api-access-2xwr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.586896 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-kube-api-access-vddvg" (OuterVolumeSpecName: "kube-api-access-vddvg") pod "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" (UID: "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b"). InnerVolumeSpecName "kube-api-access-vddvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.587051 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" (UID: "d3fdb20f-d725-45b1-9825-8c2b6f6fd24b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.591347 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-util" (OuterVolumeSpecName: "util") pod "41ef61d6-a574-45c4-a96c-4068d31bf1ba" (UID: "41ef61d6-a574-45c4-a96c-4068d31bf1ba"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683816 4520 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683857 4520 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683870 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vddvg\" (UniqueName: \"kubernetes.io/projected/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-kube-api-access-vddvg\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683886 4520 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683897 4520 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683906 4520 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683919 4520 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-util\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683930 4520 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b-console-config\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683939 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xwr7\" (UniqueName: \"kubernetes.io/projected/41ef61d6-a574-45c4-a96c-4068d31bf1ba-kube-api-access-2xwr7\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:48 crc kubenswrapper[4520]: I0130 06:54:48.683948 4520 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41ef61d6-a574-45c4-a96c-4068d31bf1ba-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.203392 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw"
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.203384 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k2zw" event={"ID":"41ef61d6-a574-45c4-a96c-4068d31bf1ba","Type":"ContainerDied","Data":"cb559c2dcf18295d761648494a06e6116191487f5268af5562fa0daa0db0b86e"}
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.203496 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb559c2dcf18295d761648494a06e6116191487f5268af5562fa0daa0db0b86e"
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.205132 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nkbdc_d3fdb20f-d725-45b1-9825-8c2b6f6fd24b/console/0.log"
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.205189 4520 generic.go:334] "Generic (PLEG): container finished" podID="d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" containerID="e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873" exitCode=2
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.205229 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nkbdc" event={"ID":"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b","Type":"ContainerDied","Data":"e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873"}
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.205276 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nkbdc" event={"ID":"d3fdb20f-d725-45b1-9825-8c2b6f6fd24b","Type":"ContainerDied","Data":"fab458684c995ff699beeefb01e6147110c0e96e736306d53c4db0dc677c15fd"}
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.205292 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nkbdc"
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.205302 4520 scope.go:117] "RemoveContainer" containerID="e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873"
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.225649 4520 scope.go:117] "RemoveContainer" containerID="e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873"
Jan 30 06:54:49 crc kubenswrapper[4520]: E0130 06:54:49.226025 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873\": container with ID starting with e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873 not found: ID does not exist" containerID="e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873"
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.226058 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873"} err="failed to get container status \"e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873\": rpc error: code = NotFound desc = could not find container \"e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873\": container with ID starting with e0ee93cdf9d69b336b883ad09cdcb8a49d8c3ce24241236e59262d082d023873 not found: ID does not exist"
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.229828 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nkbdc"]
Jan 30 06:54:49 crc kubenswrapper[4520]: I0130 06:54:49.234154 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-nkbdc"]
Jan 30 06:54:50 crc kubenswrapper[4520]: I0130 06:54:50.692169 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" path="/var/lib/kubelet/pods/d3fdb20f-d725-45b1-9825-8c2b6f6fd24b/volumes"
Jan 30 06:54:57 crc kubenswrapper[4520]: I0130 06:54:57.794102 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 06:54:57 crc kubenswrapper[4520]: I0130 06:54:57.794862 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.511033 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"]
Jan 30 06:54:58 crc kubenswrapper[4520]: E0130 06:54:58.511861 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" containerName="console"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.511972 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" containerName="console"
Jan 30 06:54:58 crc kubenswrapper[4520]: E0130 06:54:58.512029 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerName="util"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.512070 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerName="util"
Jan 30 06:54:58 crc kubenswrapper[4520]: E0130 06:54:58.512126 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerName="extract"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.512174 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerName="extract"
Jan 30 06:54:58 crc kubenswrapper[4520]: E0130 06:54:58.512219 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerName="pull"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.512256 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerName="pull"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.512412 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3fdb20f-d725-45b1-9825-8c2b6f6fd24b" containerName="console"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.512460 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ef61d6-a574-45c4-a96c-4068d31bf1ba" containerName="extract"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.512957 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.514724 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pgnqf"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.515705 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d27dd60-76c2-4896-bd08-736553ae31fb-webhook-cert\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.515758 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cb42\" (UniqueName: \"kubernetes.io/projected/8d27dd60-76c2-4896-bd08-736553ae31fb-kube-api-access-5cb42\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.515784 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d27dd60-76c2-4896-bd08-736553ae31fb-apiservice-cert\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.519849 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.519910 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.519910 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.520055 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.536567 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"]
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.617202 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d27dd60-76c2-4896-bd08-736553ae31fb-webhook-cert\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.617250 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cb42\" (UniqueName: \"kubernetes.io/projected/8d27dd60-76c2-4896-bd08-736553ae31fb-kube-api-access-5cb42\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.617276 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d27dd60-76c2-4896-bd08-736553ae31fb-apiservice-cert\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.626161 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d27dd60-76c2-4896-bd08-736553ae31fb-apiservice-cert\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.626204 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d27dd60-76c2-4896-bd08-736553ae31fb-webhook-cert\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.662900 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cb42\" (UniqueName: \"kubernetes.io/projected/8d27dd60-76c2-4896-bd08-736553ae31fb-kube-api-access-5cb42\") pod \"metallb-operator-controller-manager-55c6c66c46-npg7d\" (UID: \"8d27dd60-76c2-4896-bd08-736553ae31fb\") " pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.827486 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.899766 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"]
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.900838 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.902250 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-l2x42"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.903808 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.904647 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.911835 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"]
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.921005 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8e83470-7d61-4906-9351-b93815bd1c72-apiservice-cert\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.921047 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8e83470-7d61-4906-9351-b93815bd1c72-webhook-cert\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:58 crc kubenswrapper[4520]: I0130 06:54:58.921079 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shbvt\" (UniqueName: \"kubernetes.io/projected/c8e83470-7d61-4906-9351-b93815bd1c72-kube-api-access-shbvt\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.025585 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8e83470-7d61-4906-9351-b93815bd1c72-apiservice-cert\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.025640 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8e83470-7d61-4906-9351-b93815bd1c72-webhook-cert\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.025674 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbvt\" (UniqueName: \"kubernetes.io/projected/c8e83470-7d61-4906-9351-b93815bd1c72-kube-api-access-shbvt\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.029638 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8e83470-7d61-4906-9351-b93815bd1c72-apiservice-cert\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.032724 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8e83470-7d61-4906-9351-b93815bd1c72-webhook-cert\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.049586 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbvt\" (UniqueName: \"kubernetes.io/projected/c8e83470-7d61-4906-9351-b93815bd1c72-kube-api-access-shbvt\") pod \"metallb-operator-webhook-server-76c96b8575-pxtsl\" (UID: \"c8e83470-7d61-4906-9351-b93815bd1c72\") " pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"
Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.103338 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d"]
Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.217116 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.266568 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d" event={"ID":"8d27dd60-76c2-4896-bd08-736553ae31fb","Type":"ContainerStarted","Data":"cf4073842a8167451f2942fee7b2dacdcb60094473d7df6484cd09d911a2630a"} Jan 30 06:54:59 crc kubenswrapper[4520]: I0130 06:54:59.405471 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl"] Jan 30 06:54:59 crc kubenswrapper[4520]: W0130 06:54:59.415574 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8e83470_7d61_4906_9351_b93815bd1c72.slice/crio-ff27cb85a6339ba71ce8216188510cd30368577c1f4a56d489846fb43f3aea20 WatchSource:0}: Error finding container ff27cb85a6339ba71ce8216188510cd30368577c1f4a56d489846fb43f3aea20: Status 404 returned error can't find the container with id ff27cb85a6339ba71ce8216188510cd30368577c1f4a56d489846fb43f3aea20 Jan 30 06:55:00 crc kubenswrapper[4520]: I0130 06:55:00.274187 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" event={"ID":"c8e83470-7d61-4906-9351-b93815bd1c72","Type":"ContainerStarted","Data":"ff27cb85a6339ba71ce8216188510cd30368577c1f4a56d489846fb43f3aea20"} Jan 30 06:55:04 crc kubenswrapper[4520]: I0130 06:55:04.302472 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d" event={"ID":"8d27dd60-76c2-4896-bd08-736553ae31fb","Type":"ContainerStarted","Data":"976a470d4bb5f250faa803479b4ea9ade0bae42541474734022af6c86a4fad8e"} Jan 30 06:55:04 crc kubenswrapper[4520]: I0130 06:55:04.302941 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d" Jan 30 06:55:04 crc kubenswrapper[4520]: I0130 06:55:04.304497 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" event={"ID":"c8e83470-7d61-4906-9351-b93815bd1c72","Type":"ContainerStarted","Data":"987a6e18038f21370330633d1417c93816846a0ba1c3d1a8cbef615ab23848fc"} Jan 30 06:55:04 crc kubenswrapper[4520]: I0130 06:55:04.304669 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" Jan 30 06:55:04 crc kubenswrapper[4520]: I0130 06:55:04.327475 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d" podStartSLOduration=1.767897682 podStartE2EDuration="6.327455699s" podCreationTimestamp="2026-01-30 06:54:58 +0000 UTC" firstStartedPulling="2026-01-30 06:54:59.111321718 +0000 UTC m=+612.739673899" lastFinishedPulling="2026-01-30 06:55:03.670879736 +0000 UTC m=+617.299231916" observedRunningTime="2026-01-30 06:55:04.319429319 +0000 UTC m=+617.947781500" watchObservedRunningTime="2026-01-30 06:55:04.327455699 +0000 UTC m=+617.955807880" Jan 30 06:55:19 crc kubenswrapper[4520]: I0130 06:55:19.223490 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" Jan 30 06:55:19 crc kubenswrapper[4520]: I0130 06:55:19.250449 4520 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" podStartSLOduration=16.982919945 podStartE2EDuration="21.250419507s" podCreationTimestamp="2026-01-30 06:54:58 +0000 UTC" firstStartedPulling="2026-01-30 06:54:59.419215043 +0000 UTC m=+613.047567224" lastFinishedPulling="2026-01-30 06:55:03.686714615 +0000 UTC m=+617.315066786" observedRunningTime="2026-01-30 06:55:04.339159963 +0000 UTC m=+617.967512144" watchObservedRunningTime="2026-01-30 06:55:19.250419507 +0000 UTC m=+632.878771688" Jan 30 06:55:27 crc kubenswrapper[4520]: I0130 06:55:27.794332 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:55:27 crc kubenswrapper[4520]: I0130 06:55:27.795150 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:55:27 crc kubenswrapper[4520]: I0130 06:55:27.795225 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:55:27 crc kubenswrapper[4520]: I0130 06:55:27.796209 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"262e0cf10792038e17c9535c842bb850c34802d1edf6585f98c352abd0f2a350"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:55:27 crc kubenswrapper[4520]: I0130 06:55:27.796286 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://262e0cf10792038e17c9535c842bb850c34802d1edf6585f98c352abd0f2a350" gracePeriod=600 Jan 30 06:55:28 crc kubenswrapper[4520]: I0130 06:55:28.446897 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="262e0cf10792038e17c9535c842bb850c34802d1edf6585f98c352abd0f2a350" exitCode=0 Jan 30 06:55:28 crc kubenswrapper[4520]: I0130 06:55:28.446944 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"262e0cf10792038e17c9535c842bb850c34802d1edf6585f98c352abd0f2a350"} Jan 30 06:55:28 crc kubenswrapper[4520]: I0130 06:55:28.447400 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"23b7c2584fae4db0c5cd58feba27cd2cddcee2416ca541fef55d331d3df60688"} Jan 30 06:55:28 crc kubenswrapper[4520]: I0130 06:55:28.447423 4520 scope.go:117] "RemoveContainer" containerID="33eb4172918824c12d6f749038eb66206e75b7c9e4ce40339686339e4f47dc36" Jan 30 06:55:38 crc kubenswrapper[4520]: I0130 06:55:38.831423 4520 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-55c6c66c46-npg7d" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.422986 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ld6tp"] Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.426417 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.428492 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6"] Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.429808 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.432048 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.433360 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.436049 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5v4jx" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.436097 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.446924 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6"] Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.530155 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rr7cw"] Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.531527 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.533404 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.533458 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.533751 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8sztp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.534845 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535259 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics-certs\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535311 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcnxt\" (UniqueName: \"kubernetes.io/projected/440b0b7d-713b-4590-ad35-05fa9d42423a-kube-api-access-jcnxt\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535348 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kp8f6\" (UID: \"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535600 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-reloader\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535667 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7qt\" (UniqueName: \"kubernetes.io/projected/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-kube-api-access-vb7qt\") pod \"frr-k8s-webhook-server-7df86c4f6c-kp8f6\" (UID: \"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535784 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535869 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-conf\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535928 
4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-startup\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.535958 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-sockets\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.549074 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-9n5hv"] Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.550218 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.551733 4520 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.575810 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-9n5hv"] Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.637937 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2w9g\" (UniqueName: \"kubernetes.io/projected/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-kube-api-access-q2w9g\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638056 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kp8f6\" (UID: \"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638138 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz85h\" (UniqueName: \"kubernetes.io/projected/b338bd18-f666-4648-9d7f-325d75b9592a-kube-api-access-sz85h\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638207 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-metallb-excludel2\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: E0130 06:55:39.638268 4520 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638333 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-reloader\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc 
kubenswrapper[4520]: E0130 06:55:39.638361 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-cert podName:6ab13d5a-1ba0-4181-ae7b-69ed90c1793e nodeName:}" failed. No retries permitted until 2026-01-30 06:55:40.138329652 +0000 UTC m=+653.766681834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-cert") pod "frr-k8s-webhook-server-7df86c4f6c-kp8f6" (UID: "6ab13d5a-1ba0-4181-ae7b-69ed90c1793e") : secret "frr-k8s-webhook-server-cert" not found Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638423 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7qt\" (UniqueName: \"kubernetes.io/projected/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-kube-api-access-vb7qt\") pod \"frr-k8s-webhook-server-7df86c4f6c-kp8f6\" (UID: \"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638455 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638555 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638636 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-conf\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638694 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-startup\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638731 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-sockets\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638750 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b338bd18-f666-4648-9d7f-325d75b9592a-cert\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638780 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-metrics-certs\") pod \"speaker-rr7cw\" (UID: 
\"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638792 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-reloader\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638813 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics-certs\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638928 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b338bd18-f666-4648-9d7f-325d75b9592a-metrics-certs\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: E0130 06:55:39.638944 4520 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.638991 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcnxt\" (UniqueName: \"kubernetes.io/projected/440b0b7d-713b-4590-ad35-05fa9d42423a-kube-api-access-jcnxt\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: E0130 06:55:39.638998 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics-certs podName:440b0b7d-713b-4590-ad35-05fa9d42423a nodeName:}" failed. No retries permitted until 2026-01-30 06:55:40.138981918 +0000 UTC m=+653.767334100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics-certs") pod "frr-k8s-ld6tp" (UID: "440b0b7d-713b-4590-ad35-05fa9d42423a") : secret "frr-k8s-certs-secret" not found Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.639370 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.639380 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-conf\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.639412 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-sockets\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.639962 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/440b0b7d-713b-4590-ad35-05fa9d42423a-frr-startup\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.660188 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcnxt\" (UniqueName: \"kubernetes.io/projected/440b0b7d-713b-4590-ad35-05fa9d42423a-kube-api-access-jcnxt\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.667133 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb7qt\" (UniqueName: \"kubernetes.io/projected/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-kube-api-access-vb7qt\") pod \"frr-k8s-webhook-server-7df86c4f6c-kp8f6\" (UID: \"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.741180 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b338bd18-f666-4648-9d7f-325d75b9592a-cert\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.741230 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-metrics-certs\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.741273 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b338bd18-f666-4648-9d7f-325d75b9592a-metrics-certs\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 
06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.741303 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2w9g\" (UniqueName: \"kubernetes.io/projected/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-kube-api-access-q2w9g\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.741354 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz85h\" (UniqueName: \"kubernetes.io/projected/b338bd18-f666-4648-9d7f-325d75b9592a-kube-api-access-sz85h\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.741383 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-metallb-excludel2\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.741426 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: E0130 06:55:39.741608 4520 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 06:55:39 crc kubenswrapper[4520]: E0130 06:55:39.741662 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist podName:2ad2dd3f-550a-483f-84c0-d3c9a7477c5b nodeName:}" failed. No retries permitted until 2026-01-30 06:55:40.24164554 +0000 UTC m=+653.869997721 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist") pod "speaker-rr7cw" (UID: "2ad2dd3f-550a-483f-84c0-d3c9a7477c5b") : secret "metallb-memberlist" not found Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.742918 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-metallb-excludel2\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.745457 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b338bd18-f666-4648-9d7f-325d75b9592a-metrics-certs\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.745906 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-metrics-certs\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.756837 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz85h\" (UniqueName: \"kubernetes.io/projected/b338bd18-f666-4648-9d7f-325d75b9592a-kube-api-access-sz85h\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.758076 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b338bd18-f666-4648-9d7f-325d75b9592a-cert\") pod \"controller-6968d8fdc4-9n5hv\" (UID: \"b338bd18-f666-4648-9d7f-325d75b9592a\") " pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.758541 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2w9g\" (UniqueName: \"kubernetes.io/projected/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-kube-api-access-q2w9g\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:39 crc kubenswrapper[4520]: I0130 06:55:39.860867 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.145829 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kp8f6\" (UID: \"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.146149 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics-certs\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.149554 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ab13d5a-1ba0-4181-ae7b-69ed90c1793e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kp8f6\" (UID: \"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.150483 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/440b0b7d-713b-4590-ad35-05fa9d42423a-metrics-certs\") pod \"frr-k8s-ld6tp\" (UID: \"440b0b7d-713b-4590-ad35-05fa9d42423a\") " pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.247047 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:40 crc kubenswrapper[4520]: E0130 06:55:40.247245 4520 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 06:55:40 crc kubenswrapper[4520]: E0130 06:55:40.247320 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist podName:2ad2dd3f-550a-483f-84c0-d3c9a7477c5b nodeName:}" failed. No retries permitted until 2026-01-30 06:55:41.247304257 +0000 UTC m=+654.875656439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist") pod "speaker-rr7cw" (UID: "2ad2dd3f-550a-483f-84c0-d3c9a7477c5b") : secret "metallb-memberlist" not found Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.263278 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-9n5hv"] Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.346678 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.352806 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.533424 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"a6f27c592ddef9b66b370977492d57ab3e31cd827c488d11e675140bed1fa949"} Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.535119 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-9n5hv" event={"ID":"b338bd18-f666-4648-9d7f-325d75b9592a","Type":"ContainerStarted","Data":"f9cf768433f956dcaa138b467a34f13e9c6f9f443aa37f1a61c76ef1de7bc78c"} Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.535158 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-9n5hv" event={"ID":"b338bd18-f666-4648-9d7f-325d75b9592a","Type":"ContainerStarted","Data":"70dae541a6d8c35f47acf939f10a85da287ff03acc5b8b13552b38765f71cc5d"} Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.535170 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-9n5hv" event={"ID":"b338bd18-f666-4648-9d7f-325d75b9592a","Type":"ContainerStarted","Data":"6d4a7e9d01a1481cb5d4ae39f84e9cd3b9aff8c8d054b33afc4878b443b14415"} Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.535227 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.542446 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6"] Jan 30 06:55:40 crc kubenswrapper[4520]: W0130 06:55:40.548241 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ab13d5a_1ba0_4181_ae7b_69ed90c1793e.slice/crio-ff56388dffdd6d567cf87c1b75c9b72a18af16b883e80d0b6a9f4e533e17eef1 WatchSource:0}: Error finding container ff56388dffdd6d567cf87c1b75c9b72a18af16b883e80d0b6a9f4e533e17eef1: Status 404 returned error can't find the container with id ff56388dffdd6d567cf87c1b75c9b72a18af16b883e80d0b6a9f4e533e17eef1 Jan 30 06:55:40 crc kubenswrapper[4520]: I0130 06:55:40.554030 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-9n5hv" podStartSLOduration=1.5540183239999998 podStartE2EDuration="1.554018324s" podCreationTimestamp="2026-01-30 06:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:55:40.550192081 +0000 UTC m=+654.178544263" watchObservedRunningTime="2026-01-30 06:55:40.554018324 +0000 UTC m=+654.182370505" Jan 30 06:55:41 crc kubenswrapper[4520]: I0130 06:55:41.259780 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:41 crc kubenswrapper[4520]: I0130 06:55:41.266117 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2ad2dd3f-550a-483f-84c0-d3c9a7477c5b-memberlist\") pod \"speaker-rr7cw\" (UID: \"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b\") " pod="metallb-system/speaker-rr7cw" Jan 30 06:55:41 crc 
kubenswrapper[4520]: I0130 06:55:41.348495 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rr7cw" Jan 30 06:55:41 crc kubenswrapper[4520]: W0130 06:55:41.369223 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ad2dd3f_550a_483f_84c0_d3c9a7477c5b.slice/crio-f70ac66716fd2fd1c2a28524232b834da543e0ebd7010e07cb28786626fa2c51 WatchSource:0}: Error finding container f70ac66716fd2fd1c2a28524232b834da543e0ebd7010e07cb28786626fa2c51: Status 404 returned error can't find the container with id f70ac66716fd2fd1c2a28524232b834da543e0ebd7010e07cb28786626fa2c51 Jan 30 06:55:41 crc kubenswrapper[4520]: I0130 06:55:41.543774 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rr7cw" event={"ID":"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b","Type":"ContainerStarted","Data":"f70ac66716fd2fd1c2a28524232b834da543e0ebd7010e07cb28786626fa2c51"} Jan 30 06:55:41 crc kubenswrapper[4520]: I0130 06:55:41.545295 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" event={"ID":"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e","Type":"ContainerStarted","Data":"ff56388dffdd6d567cf87c1b75c9b72a18af16b883e80d0b6a9f4e533e17eef1"} Jan 30 06:55:42 crc kubenswrapper[4520]: I0130 06:55:42.555631 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rr7cw" event={"ID":"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b","Type":"ContainerStarted","Data":"49f500e72dfb5f22058f934500dabb25c656673b7129d0ce2af30114344a8af5"} Jan 30 06:55:42 crc kubenswrapper[4520]: I0130 06:55:42.555944 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rr7cw" event={"ID":"2ad2dd3f-550a-483f-84c0-d3c9a7477c5b","Type":"ContainerStarted","Data":"1af764f8f45d125d8b8aea1c00ff1d26ad1357c00f9221a98110a5bff96baf31"} Jan 30 06:55:42 crc kubenswrapper[4520]: I0130 06:55:42.555962 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rr7cw" Jan 30 06:55:42 crc kubenswrapper[4520]: I0130 06:55:42.573902 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rr7cw" podStartSLOduration=3.573892073 podStartE2EDuration="3.573892073s" podCreationTimestamp="2026-01-30 06:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:55:42.570896461 +0000 UTC m=+656.199248643" watchObservedRunningTime="2026-01-30 06:55:42.573892073 +0000 UTC m=+656.202244254" Jan 30 06:55:47 crc kubenswrapper[4520]: I0130 06:55:47.589254 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" event={"ID":"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e","Type":"ContainerStarted","Data":"9ccdc46d3acff15965d0eafde80630de8e9eaaff5bd00761e0708eea49b1e902"} Jan 30 06:55:47 crc kubenswrapper[4520]: I0130 06:55:47.589705 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 06:55:47 crc kubenswrapper[4520]: I0130 06:55:47.591360 4520 generic.go:334] "Generic (PLEG): container finished" podID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerID="b8407be47485b53bc34028862476f05427eca2ce1c082fe508b180e2eff202be" exitCode=0 Jan 30 06:55:47 crc kubenswrapper[4520]: I0130 06:55:47.591402 4520 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerDied","Data":"b8407be47485b53bc34028862476f05427eca2ce1c082fe508b180e2eff202be"} Jan 30 06:55:47 crc kubenswrapper[4520]: I0130 06:55:47.600924 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podStartSLOduration=1.737129875 podStartE2EDuration="8.600914324s" podCreationTimestamp="2026-01-30 06:55:39 +0000 UTC" firstStartedPulling="2026-01-30 06:55:40.550562949 +0000 UTC m=+654.178915130" lastFinishedPulling="2026-01-30 06:55:47.414347398 +0000 UTC m=+661.042699579" observedRunningTime="2026-01-30 06:55:47.599543617 +0000 UTC m=+661.227895798" watchObservedRunningTime="2026-01-30 06:55:47.600914324 +0000 UTC m=+661.229266495" Jan 30 06:55:48 crc kubenswrapper[4520]: I0130 06:55:48.601242 4520 generic.go:334] "Generic (PLEG): container finished" podID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerID="fbae2a95404c3554e65ac1e4b6acbfd0c31e91903f26975e320c50ad6d96737a" exitCode=0 Jan 30 06:55:48 crc kubenswrapper[4520]: I0130 06:55:48.601324 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerDied","Data":"fbae2a95404c3554e65ac1e4b6acbfd0c31e91903f26975e320c50ad6d96737a"} Jan 30 06:55:49 crc kubenswrapper[4520]: I0130 06:55:49.610102 4520 generic.go:334] "Generic (PLEG): container finished" podID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerID="1344bedea2b3c388f030d123f4f7d78cfd092970e53c2719d64dd0b0064b282f" exitCode=0 Jan 30 06:55:49 crc kubenswrapper[4520]: I0130 06:55:49.610191 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerDied","Data":"1344bedea2b3c388f030d123f4f7d78cfd092970e53c2719d64dd0b0064b282f"} Jan 30 06:55:50 crc kubenswrapper[4520]: I0130 06:55:50.619850 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"bbb6ea4acb5fd0d140f1b35573d50fc1286942e650490d5fe18071918d97ae18"} Jan 30 06:55:50 crc kubenswrapper[4520]: I0130 06:55:50.620899 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:50 crc kubenswrapper[4520]: I0130 06:55:50.620966 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"bedd2ae6fde9b3b08e5d115b456ca2a8c66da07e6e302076076887423316f005"} Jan 30 06:55:50 crc kubenswrapper[4520]: I0130 06:55:50.620999 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"f2806ccb85a334ffd66f303b0054d64598071e3bf4dc0ee1084858eacd8f0b81"} Jan 30 06:55:50 crc kubenswrapper[4520]: I0130 06:55:50.621018 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"1ca8125e893768a5cd04312cc6ce11e51bcc5e45ddbd78c6f5599150c266a2da"} Jan 30 06:55:50 crc kubenswrapper[4520]: I0130 06:55:50.621036 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" 
event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"25db906ed138ad8b810f14764be335cf5189012dd21914199e259c7e791b13d8"} Jan 30 06:55:50 crc kubenswrapper[4520]: I0130 06:55:50.621052 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"7f4f7eac5e3b47ce2d2e52163c44d18deddb3460b4d48161275af5149bbef8c1"} Jan 30 06:55:50 crc kubenswrapper[4520]: I0130 06:55:50.642627 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ld6tp" podStartSLOduration=4.671450367 podStartE2EDuration="11.642606034s" podCreationTimestamp="2026-01-30 06:55:39 +0000 UTC" firstStartedPulling="2026-01-30 06:55:40.457882053 +0000 UTC m=+654.086234235" lastFinishedPulling="2026-01-30 06:55:47.429037721 +0000 UTC m=+661.057389902" observedRunningTime="2026-01-30 06:55:50.639500486 +0000 UTC m=+664.267852668" watchObservedRunningTime="2026-01-30 06:55:50.642606034 +0000 UTC m=+664.270958214" Jan 30 06:55:51 crc kubenswrapper[4520]: I0130 06:55:51.352443 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rr7cw" Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.489357 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9dkdm"] Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.490655 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9dkdm" Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.492809 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-v45jj" Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.496585 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.509091 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.528252 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9dkdm"] Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.645430 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4p2\" (UniqueName: \"kubernetes.io/projected/793b9300-4e74-438c-94fa-40f43f962226-kube-api-access-gd4p2\") pod \"openstack-operator-index-9dkdm\" (UID: \"793b9300-4e74-438c-94fa-40f43f962226\") " pod="openstack-operators/openstack-operator-index-9dkdm" Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.747525 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd4p2\" (UniqueName: \"kubernetes.io/projected/793b9300-4e74-438c-94fa-40f43f962226-kube-api-access-gd4p2\") pod \"openstack-operator-index-9dkdm\" (UID: \"793b9300-4e74-438c-94fa-40f43f962226\") " pod="openstack-operators/openstack-operator-index-9dkdm" Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.765795 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd4p2\" (UniqueName: \"kubernetes.io/projected/793b9300-4e74-438c-94fa-40f43f962226-kube-api-access-gd4p2\") pod \"openstack-operator-index-9dkdm\" (UID: \"793b9300-4e74-438c-94fa-40f43f962226\") " 
pod="openstack-operators/openstack-operator-index-9dkdm" Jan 30 06:55:53 crc kubenswrapper[4520]: I0130 06:55:53.811183 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9dkdm" Jan 30 06:55:54 crc kubenswrapper[4520]: I0130 06:55:54.012455 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9dkdm"] Jan 30 06:55:54 crc kubenswrapper[4520]: W0130 06:55:54.017034 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod793b9300_4e74_438c_94fa_40f43f962226.slice/crio-a5658a7e9c97dd4eb8e88c0f596b5fde217969b8a4fed706f4abb7f4379edfb4 WatchSource:0}: Error finding container a5658a7e9c97dd4eb8e88c0f596b5fde217969b8a4fed706f4abb7f4379edfb4: Status 404 returned error can't find the container with id a5658a7e9c97dd4eb8e88c0f596b5fde217969b8a4fed706f4abb7f4379edfb4 Jan 30 06:55:54 crc kubenswrapper[4520]: I0130 06:55:54.656392 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9dkdm" event={"ID":"793b9300-4e74-438c-94fa-40f43f962226","Type":"ContainerStarted","Data":"a5658a7e9c97dd4eb8e88c0f596b5fde217969b8a4fed706f4abb7f4379edfb4"} Jan 30 06:55:55 crc kubenswrapper[4520]: I0130 06:55:55.347543 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:55 crc kubenswrapper[4520]: I0130 06:55:55.377067 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ld6tp" Jan 30 06:55:55 crc kubenswrapper[4520]: I0130 06:55:55.666367 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9dkdm" event={"ID":"793b9300-4e74-438c-94fa-40f43f962226","Type":"ContainerStarted","Data":"d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137"} Jan 30 06:55:55 crc kubenswrapper[4520]: I0130 06:55:55.686153 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9dkdm" podStartSLOduration=1.820981489 podStartE2EDuration="2.686128529s" podCreationTimestamp="2026-01-30 06:55:53 +0000 UTC" firstStartedPulling="2026-01-30 06:55:54.020133208 +0000 UTC m=+667.648485389" lastFinishedPulling="2026-01-30 06:55:54.885280247 +0000 UTC m=+668.513632429" observedRunningTime="2026-01-30 06:55:55.683888288 +0000 UTC m=+669.312240469" watchObservedRunningTime="2026-01-30 06:55:55.686128529 +0000 UTC m=+669.314480710" Jan 30 06:55:56 crc kubenswrapper[4520]: I0130 06:55:56.665120 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9dkdm"] Jan 30 06:55:57 crc kubenswrapper[4520]: I0130 06:55:57.273778 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xdqs6"] Jan 30 06:55:57 crc kubenswrapper[4520]: I0130 06:55:57.274951 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xdqs6" Jan 30 06:55:57 crc kubenswrapper[4520]: I0130 06:55:57.290388 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xdqs6"] Jan 30 06:55:57 crc kubenswrapper[4520]: I0130 06:55:57.404543 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl27l\" (UniqueName: \"kubernetes.io/projected/7d581456-1ad4-4ae7-90c6-00b61382b16a-kube-api-access-sl27l\") pod \"openstack-operator-index-xdqs6\" (UID: \"7d581456-1ad4-4ae7-90c6-00b61382b16a\") " pod="openstack-operators/openstack-operator-index-xdqs6" Jan 30 06:55:57 crc kubenswrapper[4520]: I0130 06:55:57.505313 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl27l\" (UniqueName: \"kubernetes.io/projected/7d581456-1ad4-4ae7-90c6-00b61382b16a-kube-api-access-sl27l\") pod \"openstack-operator-index-xdqs6\" (UID: \"7d581456-1ad4-4ae7-90c6-00b61382b16a\") " pod="openstack-operators/openstack-operator-index-xdqs6" Jan 30 06:55:57 crc kubenswrapper[4520]: I0130 06:55:57.523275 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl27l\" (UniqueName: \"kubernetes.io/projected/7d581456-1ad4-4ae7-90c6-00b61382b16a-kube-api-access-sl27l\") pod \"openstack-operator-index-xdqs6\" (UID: \"7d581456-1ad4-4ae7-90c6-00b61382b16a\") " pod="openstack-operators/openstack-operator-index-xdqs6" Jan 30 06:55:57 crc kubenswrapper[4520]: I0130 06:55:57.597479 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xdqs6" Jan 30 06:55:57 crc kubenswrapper[4520]: I0130 06:55:57.687756 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9dkdm" podUID="793b9300-4e74-438c-94fa-40f43f962226" containerName="registry-server" containerID="cri-o://d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137" gracePeriod=2 Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.009915 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xdqs6"] Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.015586 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9dkdm" Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.118418 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd4p2\" (UniqueName: \"kubernetes.io/projected/793b9300-4e74-438c-94fa-40f43f962226-kube-api-access-gd4p2\") pod \"793b9300-4e74-438c-94fa-40f43f962226\" (UID: \"793b9300-4e74-438c-94fa-40f43f962226\") " Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.123290 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793b9300-4e74-438c-94fa-40f43f962226-kube-api-access-gd4p2" (OuterVolumeSpecName: "kube-api-access-gd4p2") pod "793b9300-4e74-438c-94fa-40f43f962226" (UID: "793b9300-4e74-438c-94fa-40f43f962226"). InnerVolumeSpecName "kube-api-access-gd4p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.219947 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd4p2\" (UniqueName: \"kubernetes.io/projected/793b9300-4e74-438c-94fa-40f43f962226-kube-api-access-gd4p2\") on node \"crc\" DevicePath \"\"" Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.695677 4520 generic.go:334] "Generic (PLEG): container finished" podID="793b9300-4e74-438c-94fa-40f43f962226" containerID="d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137" exitCode=0 Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.695766 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9dkdm" event={"ID":"793b9300-4e74-438c-94fa-40f43f962226","Type":"ContainerDied","Data":"d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137"} Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.695799 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9dkdm" event={"ID":"793b9300-4e74-438c-94fa-40f43f962226","Type":"ContainerDied","Data":"a5658a7e9c97dd4eb8e88c0f596b5fde217969b8a4fed706f4abb7f4379edfb4"} Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.695823 4520 scope.go:117] "RemoveContainer" containerID="d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137" Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.695971 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9dkdm" Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.698425 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xdqs6" event={"ID":"7d581456-1ad4-4ae7-90c6-00b61382b16a","Type":"ContainerStarted","Data":"6946c8b20e23f73aaf5c7e1d1e4cfb304a1ec13d2f2813e5a2e7c213f818515a"} Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.698455 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xdqs6" event={"ID":"7d581456-1ad4-4ae7-90c6-00b61382b16a","Type":"ContainerStarted","Data":"b1c2242148f93ddec18f06fa9a5273c097b57fb238b43fd255a590be6192c62f"} Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.718745 4520 scope.go:117] "RemoveContainer" containerID="d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137" Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.719126 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xdqs6" podStartSLOduration=1.258692395 podStartE2EDuration="1.719114395s" podCreationTimestamp="2026-01-30 06:55:57 +0000 UTC" firstStartedPulling="2026-01-30 06:55:58.024676976 +0000 UTC m=+671.653029156" lastFinishedPulling="2026-01-30 06:55:58.485098975 +0000 UTC m=+672.113451156" observedRunningTime="2026-01-30 06:55:58.715917406 +0000 UTC m=+672.344269587" watchObservedRunningTime="2026-01-30 06:55:58.719114395 +0000 UTC m=+672.347466567" Jan 30 06:55:58 crc kubenswrapper[4520]: E0130 06:55:58.719357 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137\": container with ID starting with d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137 not found: ID does not exist" containerID="d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137" 
Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.719425 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137"} err="failed to get container status \"d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137\": rpc error: code = NotFound desc = could not find container \"d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137\": container with ID starting with d8514dfd8707c0fc4b0f7a5667d98dc602e1f974c56083307ac9f53b75098137 not found: ID does not exist"
Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.736273 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9dkdm"]
Jan 30 06:55:58 crc kubenswrapper[4520]: I0130 06:55:58.741674 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9dkdm"]
Jan 30 06:55:59 crc kubenswrapper[4520]: I0130 06:55:59.865212 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-9n5hv"
Jan 30 06:56:00 crc kubenswrapper[4520]: I0130 06:56:00.350160 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ld6tp"
Jan 30 06:56:00 crc kubenswrapper[4520]: I0130 06:56:00.357638 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6"
Jan 30 06:56:00 crc kubenswrapper[4520]: I0130 06:56:00.694091 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793b9300-4e74-438c-94fa-40f43f962226" path="/var/lib/kubelet/pods/793b9300-4e74-438c-94fa-40f43f962226/volumes"
Jan 30 06:56:07 crc kubenswrapper[4520]: I0130 06:56:07.598649 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xdqs6"
Jan 30 06:56:07 crc kubenswrapper[4520]: I0130 06:56:07.599208 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xdqs6"
Jan 30 06:56:07 crc kubenswrapper[4520]: I0130 06:56:07.628245 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xdqs6"
Jan 30 06:56:07 crc kubenswrapper[4520]: I0130 06:56:07.776466 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xdqs6"
Jan 30 06:56:08 crc kubenswrapper[4520]: I0130 06:56:08.917481 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"]
Jan 30 06:56:08 crc kubenswrapper[4520]: E0130 06:56:08.917758 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793b9300-4e74-438c-94fa-40f43f962226" containerName="registry-server"
Jan 30 06:56:08 crc kubenswrapper[4520]: I0130 06:56:08.917773 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="793b9300-4e74-438c-94fa-40f43f962226" containerName="registry-server"
Jan 30 06:56:08 crc kubenswrapper[4520]: I0130 06:56:08.917900 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="793b9300-4e74-438c-94fa-40f43f962226" containerName="registry-server"
Jan 30 06:56:08 crc kubenswrapper[4520]: I0130 06:56:08.918647 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:08 crc kubenswrapper[4520]: I0130 06:56:08.928474 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zwlsp"
Jan 30 06:56:08 crc kubenswrapper[4520]: I0130 06:56:08.932980 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"]
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.053789 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.054214 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkg8d\" (UniqueName: \"kubernetes.io/projected/55528ebc-43c2-451c-aca9-0347a149bbc5-kube-api-access-fkg8d\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.054316 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.156722 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.156835 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.156902 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkg8d\" (UniqueName: \"kubernetes.io/projected/55528ebc-43c2-451c-aca9-0347a149bbc5-kube-api-access-fkg8d\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.157410 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.157654 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.176605 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkg8d\" (UniqueName: \"kubernetes.io/projected/55528ebc-43c2-451c-aca9-0347a149bbc5-kube-api-access-fkg8d\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.236123 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.623743 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"]
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.772378 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx" event={"ID":"55528ebc-43c2-451c-aca9-0347a149bbc5","Type":"ContainerStarted","Data":"e3b852d6374d7aef1ec7b076e056cc1aa28fde7ec174acdbee95d179e1dae358"}
Jan 30 06:56:09 crc kubenswrapper[4520]: I0130 06:56:09.772422 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx" event={"ID":"55528ebc-43c2-451c-aca9-0347a149bbc5","Type":"ContainerStarted","Data":"be0229042d9b56eb68655042b7e9bc9a9120aa03f4863af3588bb9888c284dee"}
Jan 30 06:56:10 crc kubenswrapper[4520]: I0130 06:56:10.783099 4520 generic.go:334] "Generic (PLEG): container finished" podID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerID="e3b852d6374d7aef1ec7b076e056cc1aa28fde7ec174acdbee95d179e1dae358" exitCode=0
Jan 30 06:56:10 crc kubenswrapper[4520]: I0130 06:56:10.783273 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx" event={"ID":"55528ebc-43c2-451c-aca9-0347a149bbc5","Type":"ContainerDied","Data":"e3b852d6374d7aef1ec7b076e056cc1aa28fde7ec174acdbee95d179e1dae358"}
Jan 30 06:56:11 crc kubenswrapper[4520]: I0130 06:56:11.792771 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx" event={"ID":"55528ebc-43c2-451c-aca9-0347a149bbc5","Type":"ContainerStarted","Data":"e5542fc2f5eb9f1c4410aa10d1a7aa5d991bc5ae58b2b7368092ed141baaba01"}
Jan 30 06:56:12 crc kubenswrapper[4520]: I0130 06:56:12.801818 4520 generic.go:334] "Generic (PLEG): container finished" podID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerID="e5542fc2f5eb9f1c4410aa10d1a7aa5d991bc5ae58b2b7368092ed141baaba01" exitCode=0
Jan 30 06:56:12 crc kubenswrapper[4520]: I0130 06:56:12.801918 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx" event={"ID":"55528ebc-43c2-451c-aca9-0347a149bbc5","Type":"ContainerDied","Data":"e5542fc2f5eb9f1c4410aa10d1a7aa5d991bc5ae58b2b7368092ed141baaba01"}
Jan 30 06:56:13 crc kubenswrapper[4520]: I0130 06:56:13.810934 4520 generic.go:334] "Generic (PLEG): container finished" podID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerID="fb1eb6f4b70f191b61e22a1ec8b904cf45cd200151b8ebe77b060b188c40b02b" exitCode=0
Jan 30 06:56:13 crc kubenswrapper[4520]: I0130 06:56:13.811046 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx" event={"ID":"55528ebc-43c2-451c-aca9-0347a149bbc5","Type":"ContainerDied","Data":"fb1eb6f4b70f191b61e22a1ec8b904cf45cd200151b8ebe77b060b188c40b02b"}
Jan 30 06:56:14 crc kubenswrapper[4520]: I0130 06:56:14.978497 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.150592 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-bundle\") pod \"55528ebc-43c2-451c-aca9-0347a149bbc5\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") "
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.150657 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkg8d\" (UniqueName: \"kubernetes.io/projected/55528ebc-43c2-451c-aca9-0347a149bbc5-kube-api-access-fkg8d\") pod \"55528ebc-43c2-451c-aca9-0347a149bbc5\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") "
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.150695 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-util\") pod \"55528ebc-43c2-451c-aca9-0347a149bbc5\" (UID: \"55528ebc-43c2-451c-aca9-0347a149bbc5\") "
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.151477 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-bundle" (OuterVolumeSpecName: "bundle") pod "55528ebc-43c2-451c-aca9-0347a149bbc5" (UID: "55528ebc-43c2-451c-aca9-0347a149bbc5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.157000 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55528ebc-43c2-451c-aca9-0347a149bbc5-kube-api-access-fkg8d" (OuterVolumeSpecName: "kube-api-access-fkg8d") pod "55528ebc-43c2-451c-aca9-0347a149bbc5" (UID: "55528ebc-43c2-451c-aca9-0347a149bbc5"). InnerVolumeSpecName "kube-api-access-fkg8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.158442 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-util" (OuterVolumeSpecName: "util") pod "55528ebc-43c2-451c-aca9-0347a149bbc5" (UID: "55528ebc-43c2-451c-aca9-0347a149bbc5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.252123 4520 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.252155 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkg8d\" (UniqueName: \"kubernetes.io/projected/55528ebc-43c2-451c-aca9-0347a149bbc5-kube-api-access-fkg8d\") on node \"crc\" DevicePath \"\""
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.252167 4520 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55528ebc-43c2-451c-aca9-0347a149bbc5-util\") on node \"crc\" DevicePath \"\""
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.826849 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx" event={"ID":"55528ebc-43c2-451c-aca9-0347a149bbc5","Type":"ContainerDied","Data":"be0229042d9b56eb68655042b7e9bc9a9120aa03f4863af3588bb9888c284dee"}
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.827134 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be0229042d9b56eb68655042b7e9bc9a9120aa03f4863af3588bb9888c284dee"
Jan 30 06:56:15 crc kubenswrapper[4520]: I0130 06:56:15.826892 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1gh8xx"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.640656 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"]
Jan 30 06:56:21 crc kubenswrapper[4520]: E0130 06:56:21.641302 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerName="pull"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.641319 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerName="pull"
Jan 30 06:56:21 crc kubenswrapper[4520]: E0130 06:56:21.641332 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerName="util"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.641338 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerName="util"
Jan 30 06:56:21 crc kubenswrapper[4520]: E0130 06:56:21.641356 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerName="extract"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.641361 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerName="extract"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.641452 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="55528ebc-43c2-451c-aca9-0347a149bbc5" containerName="extract"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.641842 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.643973 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4mprn"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.672966 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"]
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.733919 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfzrz\" (UniqueName: \"kubernetes.io/projected/98ea90b4-3129-40ea-9499-f0ce52ba412f-kube-api-access-zfzrz\") pod \"openstack-operator-controller-init-757f46c65d-8vmmk\" (UID: \"98ea90b4-3129-40ea-9499-f0ce52ba412f\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.835089 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfzrz\" (UniqueName: \"kubernetes.io/projected/98ea90b4-3129-40ea-9499-f0ce52ba412f-kube-api-access-zfzrz\") pod \"openstack-operator-controller-init-757f46c65d-8vmmk\" (UID: \"98ea90b4-3129-40ea-9499-f0ce52ba412f\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.855711 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfzrz\" (UniqueName: \"kubernetes.io/projected/98ea90b4-3129-40ea-9499-f0ce52ba412f-kube-api-access-zfzrz\") pod \"openstack-operator-controller-init-757f46c65d-8vmmk\" (UID: \"98ea90b4-3129-40ea-9499-f0ce52ba412f\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"
Jan 30 06:56:21 crc kubenswrapper[4520]: I0130 06:56:21.957224 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"
Jan 30 06:56:22 crc kubenswrapper[4520]: I0130 06:56:22.217726 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"]
Jan 30 06:56:22 crc kubenswrapper[4520]: I0130 06:56:22.875355 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk" event={"ID":"98ea90b4-3129-40ea-9499-f0ce52ba412f","Type":"ContainerStarted","Data":"453e5e6d855533ee1e5bfa8f652a2b6e95a1330a90d9aa9dbb1e0607777860eb"}
Jan 30 06:56:27 crc kubenswrapper[4520]: I0130 06:56:27.920492 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk" event={"ID":"98ea90b4-3129-40ea-9499-f0ce52ba412f","Type":"ContainerStarted","Data":"6326c61215e1c04350f45f4369cd480b2e33608529c570c360e6ab8fb9065ec9"}
Jan 30 06:56:27 crc kubenswrapper[4520]: I0130 06:56:27.921216 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"
Jan 30 06:56:27 crc kubenswrapper[4520]: I0130 06:56:27.948427 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk" podStartSLOduration=2.18510756 podStartE2EDuration="6.948415538s" podCreationTimestamp="2026-01-30 06:56:21 +0000 UTC" firstStartedPulling="2026-01-30 06:56:22.221945069 +0000 UTC m=+695.850297251" lastFinishedPulling="2026-01-30 06:56:26.985253048 +0000 UTC m=+700.613605229" observedRunningTime="2026-01-30 06:56:27.942063409 +0000 UTC m=+701.570415591" watchObservedRunningTime="2026-01-30 06:56:27.948415538 +0000 UTC m=+701.576767720"
Jan 30 06:56:41 crc kubenswrapper[4520]: I0130 06:56:41.960350 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-8vmmk"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.205727 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.207159 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.210233 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.211091 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.212574 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xldf\" (UniqueName: \"kubernetes.io/projected/dda4dad2-f4d8-494e-9c59-28413625eb1d-kube-api-access-9xldf\") pod \"cinder-operator-controller-manager-8d874c8fc-qklbf\" (UID: \"dda4dad2-f4d8-494e-9c59-28413625eb1d\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.212712 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljr7w\" (UniqueName: \"kubernetes.io/projected/d34fd2f5-b868-4eb8-9708-48b5e31e1397-kube-api-access-ljr7w\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-df2r7\" (UID: \"d34fd2f5-b868-4eb8-9708-48b5e31e1397\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.214262 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9dbf8"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.214629 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-scdgp"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.229213 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.236568 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.244700 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.245896 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.249591 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5l8pp"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.257510 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gms89"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.258391 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.265613 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.274210 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tmt49"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.302005 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gms89"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.309149 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.310102 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.314243 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-59q72"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.314481 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.320614 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpjm\" (UniqueName: \"kubernetes.io/projected/7ac2569b-0787-4f14-9039-a7541c6123e6-kube-api-access-2qpjm\") pod \"glance-operator-controller-manager-8886f4c47-gms89\" (UID: \"7ac2569b-0787-4f14-9039-a7541c6123e6\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.320784 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx487\" (UniqueName: \"kubernetes.io/projected/fbf504d7-8829-43eb-983a-e7be0f5929ac-kube-api-access-wx487\") pod \"heat-operator-controller-manager-69d6db494d-6hlhp\" (UID: \"fbf504d7-8829-43eb-983a-e7be0f5929ac\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.320892 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xldf\" (UniqueName: \"kubernetes.io/projected/dda4dad2-f4d8-494e-9c59-28413625eb1d-kube-api-access-9xldf\") pod \"cinder-operator-controller-manager-8d874c8fc-qklbf\" (UID: \"dda4dad2-f4d8-494e-9c59-28413625eb1d\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.320979 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kg5\" (UniqueName: \"kubernetes.io/projected/ff428ec2-c3cf-413e-ac23-8fe55a37d261-kube-api-access-67kg5\") pod \"designate-operator-controller-manager-6d9697b7f4-w4cwl\" (UID: \"ff428ec2-c3cf-413e-ac23-8fe55a37d261\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.321096 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljr7w\" (UniqueName: \"kubernetes.io/projected/d34fd2f5-b868-4eb8-9708-48b5e31e1397-kube-api-access-ljr7w\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-df2r7\" (UID: \"d34fd2f5-b868-4eb8-9708-48b5e31e1397\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.336341 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.337113 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.343289 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-h2465"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.343565 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.343758 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.344411 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.351347 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2mh9q"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.351502 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.352211 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.354025 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.354310 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-djt2s"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.355379 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xldf\" (UniqueName: \"kubernetes.io/projected/dda4dad2-f4d8-494e-9c59-28413625eb1d-kube-api-access-9xldf\") pod \"cinder-operator-controller-manager-8d874c8fc-qklbf\" (UID: \"dda4dad2-f4d8-494e-9c59-28413625eb1d\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.364405 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljr7w\" (UniqueName: \"kubernetes.io/projected/d34fd2f5-b868-4eb8-9708-48b5e31e1397-kube-api-access-ljr7w\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-df2r7\" (UID: \"d34fd2f5-b868-4eb8-9708-48b5e31e1397\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.364666 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.380489 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.406181 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.408641 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.413697 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xszsd"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.413998 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.427970 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpjm\" (UniqueName: \"kubernetes.io/projected/7ac2569b-0787-4f14-9039-a7541c6123e6-kube-api-access-2qpjm\") pod \"glance-operator-controller-manager-8886f4c47-gms89\" (UID: \"7ac2569b-0787-4f14-9039-a7541c6123e6\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.428078 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757l4\" (UniqueName: \"kubernetes.io/projected/8da917f8-3f81-4867-9a6f-ac261284771c-kube-api-access-757l4\") pod \"ironic-operator-controller-manager-5f4b8bd54d-gfpw2\" (UID: \"8da917f8-3f81-4867-9a6f-ac261284771c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.428154 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx487\" (UniqueName: \"kubernetes.io/projected/fbf504d7-8829-43eb-983a-e7be0f5929ac-kube-api-access-wx487\") pod \"heat-operator-controller-manager-69d6db494d-6hlhp\" (UID: \"fbf504d7-8829-43eb-983a-e7be0f5929ac\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.428212 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7x8\" (UniqueName: \"kubernetes.io/projected/b7d9e5dd-5b3b-4aaa-834c-74029a7de138-kube-api-access-7t7x8\") pod \"keystone-operator-controller-manager-84f48565d4-cnhtx\" (UID: \"b7d9e5dd-5b3b-4aaa-834c-74029a7de138\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.428246 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6gs2\" (UniqueName: \"kubernetes.io/projected/9ab80be7-f6c7-420b-996c-3a373886483f-kube-api-access-l6gs2\") pod \"horizon-operator-controller-manager-5fb775575f-kf987\" (UID: \"9ab80be7-f6c7-420b-996c-3a373886483f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.428295 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67kg5\" (UniqueName: \"kubernetes.io/projected/ff428ec2-c3cf-413e-ac23-8fe55a37d261-kube-api-access-67kg5\") pod \"designate-operator-controller-manager-6d9697b7f4-w4cwl\" (UID: \"ff428ec2-c3cf-413e-ac23-8fe55a37d261\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.428369 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cxlt\" (UniqueName: \"kubernetes.io/projected/cdf5fc79-647e-4d70-8785-682d7f27ce10-kube-api-access-9cxlt\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.428406 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.460853 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67kg5\" (UniqueName: \"kubernetes.io/projected/ff428ec2-c3cf-413e-ac23-8fe55a37d261-kube-api-access-67kg5\") pod \"designate-operator-controller-manager-6d9697b7f4-w4cwl\" (UID: \"ff428ec2-c3cf-413e-ac23-8fe55a37d261\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.471060 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-52h27"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.472752 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.481017 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-52h27"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.481156 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4vw7k"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.511229 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx487\" (UniqueName: \"kubernetes.io/projected/fbf504d7-8829-43eb-983a-e7be0f5929ac-kube-api-access-wx487\") pod \"heat-operator-controller-manager-69d6db494d-6hlhp\" (UID: \"fbf504d7-8829-43eb-983a-e7be0f5929ac\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.511700 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpjm\" (UniqueName: \"kubernetes.io/projected/7ac2569b-0787-4f14-9039-a7541c6123e6-kube-api-access-2qpjm\") pod \"glance-operator-controller-manager-8886f4c47-gms89\" (UID: \"7ac2569b-0787-4f14-9039-a7541c6123e6\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.513022 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.514201 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.518247 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-npdpx"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.525039 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.529743 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rlp\" (UniqueName: \"kubernetes.io/projected/6bb8d69e-cfd3-4d0f-9c93-53716539e927-kube-api-access-v8rlp\") pod \"mariadb-operator-controller-manager-67bf948998-sws5x\" (UID: \"6bb8d69e-cfd3-4d0f-9c93-53716539e927\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.529794 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7x8\" (UniqueName: \"kubernetes.io/projected/b7d9e5dd-5b3b-4aaa-834c-74029a7de138-kube-api-access-7t7x8\") pod \"keystone-operator-controller-manager-84f48565d4-cnhtx\" (UID: \"b7d9e5dd-5b3b-4aaa-834c-74029a7de138\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.529822 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6gs2\" (UniqueName: \"kubernetes.io/projected/9ab80be7-f6c7-420b-996c-3a373886483f-kube-api-access-l6gs2\") pod \"horizon-operator-controller-manager-5fb775575f-kf987\" (UID: \"9ab80be7-f6c7-420b-996c-3a373886483f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.529895 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cxlt\" (UniqueName: \"kubernetes.io/projected/cdf5fc79-647e-4d70-8785-682d7f27ce10-kube-api-access-9cxlt\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.529920 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.529940 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b5b9\" (UniqueName: \"kubernetes.io/projected/cd83993b-94e2-438b-9f19-8179f70b4a0e-kube-api-access-4b5b9\") pod \"manila-operator-controller-manager-7dd968899f-52h27\" (UID: \"cd83993b-94e2-438b-9f19-8179f70b4a0e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.529980 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-757l4\" (UniqueName: \"kubernetes.io/projected/8da917f8-3f81-4867-9a6f-ac261284771c-kube-api-access-757l4\") pod \"ironic-operator-controller-manager-5f4b8bd54d-gfpw2\" (UID: \"8da917f8-3f81-4867-9a6f-ac261284771c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2"
Jan 30 06:57:01 crc kubenswrapper[4520]: E0130 06:57:01.530655 4520 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 06:57:01 crc kubenswrapper[4520]: E0130 06:57:01.530712 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert podName:cdf5fc79-647e-4d70-8785-682d7f27ce10 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:02.030696333 +0000 UTC m=+735.659048503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert") pod "infra-operator-controller-manager-79955696d6-jfrp7" (UID: "cdf5fc79-647e-4d70-8785-682d7f27ce10") : secret "infra-operator-webhook-server-cert" not found
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.530877 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.535745 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.536175 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.546538 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.547289 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.562643 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.576654 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.576797 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cxlt\" (UniqueName: \"kubernetes.io/projected/cdf5fc79-647e-4d70-8785-682d7f27ce10-kube-api-access-9cxlt\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.576935 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-cwlv4"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.579122 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bhfxp"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.584219 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6gs2\" (UniqueName: \"kubernetes.io/projected/9ab80be7-f6c7-420b-996c-3a373886483f-kube-api-access-l6gs2\") pod \"horizon-operator-controller-manager-5fb775575f-kf987\" (UID: \"9ab80be7-f6c7-420b-996c-3a373886483f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.584728 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-757l4\" (UniqueName: \"kubernetes.io/projected/8da917f8-3f81-4867-9a6f-ac261284771c-kube-api-access-757l4\") pod \"ironic-operator-controller-manager-5f4b8bd54d-gfpw2\" (UID: \"8da917f8-3f81-4867-9a6f-ac261284771c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.591368 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.599572 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t7x8\" (UniqueName: \"kubernetes.io/projected/b7d9e5dd-5b3b-4aaa-834c-74029a7de138-kube-api-access-7t7x8\") pod \"keystone-operator-controller-manager-84f48565d4-cnhtx\" (UID: \"b7d9e5dd-5b3b-4aaa-834c-74029a7de138\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.599628 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.609086 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.622168 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.623107 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.632910 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nxg8g"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.635417 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mww\" (UniqueName: \"kubernetes.io/projected/255320d6-1503-4351-ad06-7794cbbdd120-kube-api-access-t7mww\") pod \"nova-operator-controller-manager-55bff696bd-9sjbr\" (UID: \"255320d6-1503-4351-ad06-7794cbbdd120\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.635508 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b54w\" (UniqueName: \"kubernetes.io/projected/27734711-fc9e-4ddf-acc0-47761e072c20-kube-api-access-2b54w\") pod \"octavia-operator-controller-manager-6687f8d877-kh9zj\" (UID: \"27734711-fc9e-4ddf-acc0-47761e072c20\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.635634 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b5b9\" (UniqueName: \"kubernetes.io/projected/cd83993b-94e2-438b-9f19-8179f70b4a0e-kube-api-access-4b5b9\") pod \"manila-operator-controller-manager-7dd968899f-52h27\" (UID: \"cd83993b-94e2-438b-9f19-8179f70b4a0e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.635713 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qrsz\" (UniqueName: \"kubernetes.io/projected/c2d1bf96-9105-4d5d-8dcd-174c098c76d9-kube-api-access-7qrsz\") pod \"neutron-operator-controller-manager-585dbc889-l28fb\" (UID: \"c2d1bf96-9105-4d5d-8dcd-174c098c76d9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.635825 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rlp\" (UniqueName: \"kubernetes.io/projected/6bb8d69e-cfd3-4d0f-9c93-53716539e927-kube-api-access-v8rlp\") pod \"mariadb-operator-controller-manager-67bf948998-sws5x\" (UID: \"6bb8d69e-cfd3-4d0f-9c93-53716539e927\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.637479 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.642247 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.643166 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.646940 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.648227 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lkmjh"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.648404 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.655925 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rlp\" (UniqueName: \"kubernetes.io/projected/6bb8d69e-cfd3-4d0f-9c93-53716539e927-kube-api-access-v8rlp\") pod \"mariadb-operator-controller-manager-67bf948998-sws5x\" (UID: \"6bb8d69e-cfd3-4d0f-9c93-53716539e927\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.655954 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b5b9\" (UniqueName: \"kubernetes.io/projected/cd83993b-94e2-438b-9f19-8179f70b4a0e-kube-api-access-4b5b9\") pod \"manila-operator-controller-manager-7dd968899f-52h27\" (UID: \"cd83993b-94e2-438b-9f19-8179f70b4a0e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.672690 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.677687 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.680221 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.685160 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.686162 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xrf56"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.686170 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.696181 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-77cjx"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.701432 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.711379 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.717677 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rm676"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.718630 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.720367 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-ppfng"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.728093 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rm676"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.731571 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.733295 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.734115 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.739870 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wzk44"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.747452 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9"]
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.752106 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7mww\" (UniqueName: \"kubernetes.io/projected/255320d6-1503-4351-ad06-7794cbbdd120-kube-api-access-t7mww\") pod \"nova-operator-controller-manager-55bff696bd-9sjbr\" (UID: \"255320d6-1503-4351-ad06-7794cbbdd120\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.752206 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b54w\" (UniqueName: \"kubernetes.io/projected/27734711-fc9e-4ddf-acc0-47761e072c20-kube-api-access-2b54w\") pod \"octavia-operator-controller-manager-6687f8d877-kh9zj\" (UID: \"27734711-fc9e-4ddf-acc0-47761e072c20\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.752249 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9fp4\" (UniqueName: \"kubernetes.io/projected/80a81bc2-ebfd-4fa9-80ed-ddb70fb32677-kube-api-access-m9fp4\") pod \"swift-operator-controller-manager-68fc8c869-rm676\" (UID: \"80a81bc2-ebfd-4fa9-80ed-ddb70fb32677\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.752322 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9cqd\" (UniqueName: \"kubernetes.io/projected/06701a52-2501-4045-b254-90b886c11b47-kube-api-access-p9cqd\") pod \"ovn-operator-controller-manager-788c46999f-2t4v6\" (UID: \"06701a52-2501-4045-b254-90b886c11b47\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.752356 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv2ts\" (UniqueName: \"kubernetes.io/projected/3099544c-3b89-415c-aea6-f56b7581a803-kube-api-access-rv2ts\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.752382 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qrsz\" (UniqueName: \"kubernetes.io/projected/c2d1bf96-9105-4d5d-8dcd-174c098c76d9-kube-api-access-7qrsz\") pod \"neutron-operator-controller-manager-585dbc889-l28fb\" (UID: \"c2d1bf96-9105-4d5d-8dcd-174c098c76d9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb"
Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.752458 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wrzm\" (UniqueName:
\"kubernetes.io/projected/5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40-kube-api-access-9wrzm\") pod \"placement-operator-controller-manager-5b964cf4cd-j2t49\" (UID: \"5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.752507 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.753113 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.769099 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7mww\" (UniqueName: \"kubernetes.io/projected/255320d6-1503-4351-ad06-7794cbbdd120-kube-api-access-t7mww\") pod \"nova-operator-controller-manager-55bff696bd-9sjbr\" (UID: \"255320d6-1503-4351-ad06-7794cbbdd120\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.769302 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qrsz\" (UniqueName: \"kubernetes.io/projected/c2d1bf96-9105-4d5d-8dcd-174c098c76d9-kube-api-access-7qrsz\") pod \"neutron-operator-controller-manager-585dbc889-l28fb\" (UID: \"c2d1bf96-9105-4d5d-8dcd-174c098c76d9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.784457 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.792964 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4"] Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.799414 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.800383 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b54w\" (UniqueName: \"kubernetes.io/projected/27734711-fc9e-4ddf-acc0-47761e072c20-kube-api-access-2b54w\") pod \"octavia-operator-controller-manager-6687f8d877-kh9zj\" (UID: \"27734711-fc9e-4ddf-acc0-47761e072c20\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.803465 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8gxxt" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.852262 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4"] Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.856628 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9fp4\" (UniqueName: \"kubernetes.io/projected/80a81bc2-ebfd-4fa9-80ed-ddb70fb32677-kube-api-access-m9fp4\") pod \"swift-operator-controller-manager-68fc8c869-rm676\" (UID: \"80a81bc2-ebfd-4fa9-80ed-ddb70fb32677\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.856701 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9cqd\" (UniqueName: \"kubernetes.io/projected/06701a52-2501-4045-b254-90b886c11b47-kube-api-access-p9cqd\") pod \"ovn-operator-controller-manager-788c46999f-2t4v6\" (UID: \"06701a52-2501-4045-b254-90b886c11b47\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.856730 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv2ts\" (UniqueName: \"kubernetes.io/projected/3099544c-3b89-415c-aea6-f56b7581a803-kube-api-access-rv2ts\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.856785 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wrzm\" (UniqueName: \"kubernetes.io/projected/5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40-kube-api-access-9wrzm\") pod \"placement-operator-controller-manager-5b964cf4cd-j2t49\" (UID: \"5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.856821 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgt8\" (UniqueName: \"kubernetes.io/projected/4e1b6bdd-a23f-4023-861a-e28c2dd5e640-kube-api-access-2kgt8\") pod \"telemetry-operator-controller-manager-64b5b76f97-pmrw9\" (UID: \"4e1b6bdd-a23f-4023-861a-e28c2dd5e640\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.856841 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert\") pod 
\"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.856862 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmpp\" (UniqueName: \"kubernetes.io/projected/cd6316df-91d5-4c8c-84ac-d02c952a05c9-kube-api-access-hsmpp\") pod \"test-operator-controller-manager-56f8bfcd9f-2ppq4\" (UID: \"cd6316df-91d5-4c8c-84ac-d02c952a05c9\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" Jan 30 06:57:01 crc kubenswrapper[4520]: E0130 06:57:01.857491 4520 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:01 crc kubenswrapper[4520]: E0130 06:57:01.857546 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert podName:3099544c-3b89-415c-aea6-f56b7581a803 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:02.35753335 +0000 UTC m=+735.985885532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" (UID: "3099544c-3b89-415c-aea6-f56b7581a803") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.857762 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.886590 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.887446 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wrzm\" (UniqueName: \"kubernetes.io/projected/5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40-kube-api-access-9wrzm\") pod \"placement-operator-controller-manager-5b964cf4cd-j2t49\" (UID: \"5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.888737 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9fp4\" (UniqueName: \"kubernetes.io/projected/80a81bc2-ebfd-4fa9-80ed-ddb70fb32677-kube-api-access-m9fp4\") pod \"swift-operator-controller-manager-68fc8c869-rm676\" (UID: \"80a81bc2-ebfd-4fa9-80ed-ddb70fb32677\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.894036 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9cqd\" (UniqueName: \"kubernetes.io/projected/06701a52-2501-4045-b254-90b886c11b47-kube-api-access-p9cqd\") pod \"ovn-operator-controller-manager-788c46999f-2t4v6\" (UID: \"06701a52-2501-4045-b254-90b886c11b47\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.903166 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv2ts\" (UniqueName: \"kubernetes.io/projected/3099544c-3b89-415c-aea6-f56b7581a803-kube-api-access-rv2ts\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.905418 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.922940 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.957797 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.960870 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kgt8\" (UniqueName: \"kubernetes.io/projected/4e1b6bdd-a23f-4023-861a-e28c2dd5e640-kube-api-access-2kgt8\") pod \"telemetry-operator-controller-manager-64b5b76f97-pmrw9\" (UID: \"4e1b6bdd-a23f-4023-861a-e28c2dd5e640\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" Jan 30 06:57:01 crc kubenswrapper[4520]: I0130 06:57:01.980522 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmpp\" (UniqueName: \"kubernetes.io/projected/cd6316df-91d5-4c8c-84ac-d02c952a05c9-kube-api-access-hsmpp\") pod \"test-operator-controller-manager-56f8bfcd9f-2ppq4\" (UID: \"cd6316df-91d5-4c8c-84ac-d02c952a05c9\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.018741 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.022425 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgt8\" (UniqueName: \"kubernetes.io/projected/4e1b6bdd-a23f-4023-861a-e28c2dd5e640-kube-api-access-2kgt8\") pod \"telemetry-operator-controller-manager-64b5b76f97-pmrw9\" (UID: \"4e1b6bdd-a23f-4023-861a-e28c2dd5e640\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.025206 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.036190 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.049744 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmpp\" (UniqueName: \"kubernetes.io/projected/cd6316df-91d5-4c8c-84ac-d02c952a05c9-kube-api-access-hsmpp\") pod \"test-operator-controller-manager-56f8bfcd9f-2ppq4\" (UID: \"cd6316df-91d5-4c8c-84ac-d02c952a05c9\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.059844 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.083319 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.083441 4520 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.083496 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert podName:cdf5fc79-647e-4d70-8785-682d7f27ce10 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:03.083481258 +0000 UTC m=+736.711833438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert") pod "infra-operator-controller-manager-79955696d6-jfrp7" (UID: "cdf5fc79-647e-4d70-8785-682d7f27ce10") : secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.126441 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.135846 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6ksdm"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.144240 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.148327 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9bgxr" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.157122 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6ksdm"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.186351 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskwv\" (UniqueName: \"kubernetes.io/projected/45285265-5fe0-4c19-a169-fe2598b27a5d-kube-api-access-jskwv\") pod \"watcher-operator-controller-manager-564965969-6ksdm\" (UID: \"45285265-5fe0-4c19-a169-fe2598b27a5d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.187826 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.188965 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.193525 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.193846 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.193958 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lggkz" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.194122 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.245628 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.247462 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.252807 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hsznp" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.265012 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.288782 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskwv\" (UniqueName: \"kubernetes.io/projected/45285265-5fe0-4c19-a169-fe2598b27a5d-kube-api-access-jskwv\") pod \"watcher-operator-controller-manager-564965969-6ksdm\" (UID: \"45285265-5fe0-4c19-a169-fe2598b27a5d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.288924 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.288960 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24kt\" (UniqueName: \"kubernetes.io/projected/c2f02050-fdee-42d1-87c0-74104b2aa6bc-kube-api-access-p24kt\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.289100 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnm8c\" (UniqueName: \"kubernetes.io/projected/6b8a65a3-bc6c-473d-892f-4d80011c854f-kube-api-access-pnm8c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zr9jd\" (UID: \"6b8a65a3-bc6c-473d-892f-4d80011c854f\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.289167 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.314117 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskwv\" (UniqueName: \"kubernetes.io/projected/45285265-5fe0-4c19-a169-fe2598b27a5d-kube-api-access-jskwv\") pod \"watcher-operator-controller-manager-564965969-6ksdm\" (UID: \"45285265-5fe0-4c19-a169-fe2598b27a5d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.353655 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.394380 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnm8c\" (UniqueName: \"kubernetes.io/projected/6b8a65a3-bc6c-473d-892f-4d80011c854f-kube-api-access-pnm8c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zr9jd\" (UID: \"6b8a65a3-bc6c-473d-892f-4d80011c854f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.394439 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.394560 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.394581 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24kt\" (UniqueName: \"kubernetes.io/projected/c2f02050-fdee-42d1-87c0-74104b2aa6bc-kube-api-access-p24kt\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.394664 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.394808 4520 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.394858 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert podName:3099544c-3b89-415c-aea6-f56b7581a803 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:03.394843174 +0000 UTC m=+737.023195355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" (UID: "3099544c-3b89-415c-aea6-f56b7581a803") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.394852 4520 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.394971 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:02.894938844 +0000 UTC m=+736.523291024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "metrics-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.395118 4520 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.395155 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:02.895145482 +0000 UTC m=+736.523497663 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "webhook-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.423349 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.428574 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.447263 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnm8c\" (UniqueName: \"kubernetes.io/projected/6b8a65a3-bc6c-473d-892f-4d80011c854f-kube-api-access-pnm8c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zr9jd\" (UID: \"6b8a65a3-bc6c-473d-892f-4d80011c854f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.455024 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24kt\" (UniqueName: \"kubernetes.io/projected/c2f02050-fdee-42d1-87c0-74104b2aa6bc-kube-api-access-p24kt\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.476597 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.516975 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.526042 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987"] Jan 30 06:57:02 crc kubenswrapper[4520]: W0130 06:57:02.571925 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da917f8_3f81_4867_9a6f_ac261284771c.slice/crio-69c22bed706eefbd2682387fa8aa48957a10ceb8194f3ff9cdc044541a032da3 WatchSource:0}: Error finding container 69c22bed706eefbd2682387fa8aa48957a10ceb8194f3ff9cdc044541a032da3: Status 404 returned error can't find the container with id 69c22bed706eefbd2682387fa8aa48957a10ceb8194f3ff9cdc044541a032da3 Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.579015 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gms89"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.586936 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.603078 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.881563 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx"] Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.893617 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr"] Jan 30 06:57:02 crc kubenswrapper[4520]: W0130 06:57:02.939946 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod255320d6_1503_4351_ad06_7794cbbdd120.slice/crio-30d2fa17fb88f5529e8a4e99ee28466c55636c0330b9126953f7326cbf28fe69 WatchSource:0}: Error finding container 30d2fa17fb88f5529e8a4e99ee28466c55636c0330b9126953f7326cbf28fe69: Status 404 returned error can't find the container with id 30d2fa17fb88f5529e8a4e99ee28466c55636c0330b9126953f7326cbf28fe69 Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.952352 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.952503 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.952665 4520 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.952734 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:03.952713961 +0000 UTC m=+737.581066141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "metrics-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.952787 4520 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: E0130 06:57:02.952822 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:03.952813888 +0000 UTC m=+737.581166069 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "webhook-server-cert" not found Jan 30 06:57:02 crc kubenswrapper[4520]: I0130 06:57:02.957938 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj"] Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.053604 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-52h27"] Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.053651 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x"] Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.156768 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.156966 4520 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.157007 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert podName:cdf5fc79-647e-4d70-8785-682d7f27ce10 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:05.156994656 +0000 UTC m=+738.785346837 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert") pod "infra-operator-controller-manager-79955696d6-jfrp7" (UID: "cdf5fc79-647e-4d70-8785-682d7f27ce10") : secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.180695 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj" event={"ID":"27734711-fc9e-4ddf-acc0-47761e072c20","Type":"ContainerStarted","Data":"9ef10dba7ea45900dc4f5528583ad49f5e9b4af8603c4e7f450a2c1e1265ba3f"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.182369 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" event={"ID":"d34fd2f5-b868-4eb8-9708-48b5e31e1397","Type":"ContainerStarted","Data":"a502b85342aa4f0cb2641c7f6cdc1e017b03d78d54818afaaaf25de8ac644ca3"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.183593 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" event={"ID":"cd83993b-94e2-438b-9f19-8179f70b4a0e","Type":"ContainerStarted","Data":"d87d59600125cf04a49bd2fffafadee2e15debcd5a4af2d211f79c99a468914c"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.184523 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89" event={"ID":"7ac2569b-0787-4f14-9039-a7541c6123e6","Type":"ContainerStarted","Data":"0712ba62c451f0592a0a6333523648de5508907e114668781138f9821019467a"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.185552 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl" event={"ID":"ff428ec2-c3cf-413e-ac23-8fe55a37d261","Type":"ContainerStarted","Data":"9f230c510a59dc7abda6d12715ee70ed1764fe26b7d8b6bdab65ca5e403af1a6"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.188896 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" event={"ID":"255320d6-1503-4351-ad06-7794cbbdd120","Type":"ContainerStarted","Data":"30d2fa17fb88f5529e8a4e99ee28466c55636c0330b9126953f7326cbf28fe69"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.198215 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987" event={"ID":"9ab80be7-f6c7-420b-996c-3a373886483f","Type":"ContainerStarted","Data":"2563d33b38f2d7973b278574fcd902770f6df4ba54a649b4fa6ebbffa1f4a97a"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.202936 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" event={"ID":"fbf504d7-8829-43eb-983a-e7be0f5929ac","Type":"ContainerStarted","Data":"7b141b19e340930b9d993594e85d704e2c42ff1882c8d1b1c01e4d11b186a3e9"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.207162 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf" event={"ID":"dda4dad2-f4d8-494e-9c59-28413625eb1d","Type":"ContainerStarted","Data":"4aab8c0ece4feb7456ce3bec2f65e6555f3be1ccb4c6468bdd77fc74533ca52f"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.208354 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx" event={"ID":"b7d9e5dd-5b3b-4aaa-834c-74029a7de138","Type":"ContainerStarted","Data":"be776cf4b97675c2381dc1d14c82f0039e4fc474accc794ef25c4c9aab6a6371"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.210283 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" event={"ID":"6bb8d69e-cfd3-4d0f-9c93-53716539e927","Type":"ContainerStarted","Data":"7aac19cceccaa9f65edef89ce65f9b9a0373426d9a5d28ea14d5e47a5cd30639"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.217142 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2" event={"ID":"8da917f8-3f81-4867-9a6f-ac261284771c","Type":"ContainerStarted","Data":"69c22bed706eefbd2682387fa8aa48957a10ceb8194f3ff9cdc044541a032da3"} Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.361193 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49"] Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.361245 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb"] Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.426800 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9"] Jan 30 06:57:03 crc kubenswrapper[4520]: W0130 06:57:03.438796 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c259a8d_a9cf_46c1_84b3_dbf5e2fb6e40.slice/crio-9a16c823168ea0b0cea9c116c6d959a6a3ef85ae8edcb764b16993d44fe20f5b WatchSource:0}: Error finding container 9a16c823168ea0b0cea9c116c6d959a6a3ef85ae8edcb764b16993d44fe20f5b: Status 404 returned error can't find the container with id 9a16c823168ea0b0cea9c116c6d959a6a3ef85ae8edcb764b16993d44fe20f5b Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.448812 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wrzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-j2t49_openstack-operators(5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.450209 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" podUID="5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40" Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.474022 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4"] Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.482410 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hsmpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-2ppq4_openstack-operators(cd6316df-91d5-4c8c-84ac-d02c952a05c9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.482670 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.482803 4520 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.482844 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert podName:3099544c-3b89-415c-aea6-f56b7581a803 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:05.482831353 +0000 UTC m=+739.111183524 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" (UID: "3099544c-3b89-415c-aea6-f56b7581a803") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.483657 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" podUID="cd6316df-91d5-4c8c-84ac-d02c952a05c9" Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.496707 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6"] Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.511434 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6ksdm"] Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.512010 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9cqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-2t4v6_openstack-operators(06701a52-2501-4045-b254-90b886c11b47): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.513978 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" podUID="06701a52-2501-4045-b254-90b886c11b47" Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.515710 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd"] Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.523443 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rm676"] Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.559832 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jskwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-6ksdm_openstack-operators(45285265-5fe0-4c19-a169-fe2598b27a5d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.560011 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m9fp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-rm676_openstack-operators(80a81bc2-ebfd-4fa9-80ed-ddb70fb32677): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.560952 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" podUID="45285265-5fe0-4c19-a169-fe2598b27a5d" Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.561113 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" podUID="80a81bc2-ebfd-4fa9-80ed-ddb70fb32677" Jan 30 06:57:03 crc kubenswrapper[4520]: I0130 06:57:03.992641 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:03 crc 
kubenswrapper[4520]: I0130 06:57:03.992770 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.993035 4520 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.993134 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:05.993116222 +0000 UTC m=+739.621468394 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "webhook-server-cert" not found Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.993663 4520 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 06:57:03 crc kubenswrapper[4520]: E0130 06:57:03.994097 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:05.99406704 +0000 UTC m=+739.622419222 (durationBeforeRetry 2s). 
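All of the MountVolume.SetUp failures above have the same shape: the pod spec references a Secret (webhook-server-cert, metrics-server-cert, infra-operator-webhook-server-cert) that its issuing controller has not created yet, so the kubelet cannot materialize the volume and schedules a retry. A quick existence check for exactly the names these entries report, sketched with client-go; the kubeconfig path and the use of client-go are assumptions, not anything this log shows:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the node being inspected.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Secret names reported as "not found" in the MountVolume.SetUp errors.
	for _, name := range []string{
		"webhook-server-cert",
		"metrics-server-cert",
		"infra-operator-webhook-server-cert",
		"openstack-baremetal-operator-webhook-server-cert",
	} {
		_, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s: %v\n", name, err) // still missing: the issuer has not created it yet
		} else {
			fmt.Printf("%s: present\n", name)
		}
	}
}

Once the secrets exist the mounts go through on the next retry, which is what the 06:57:17 and 06:57:18 entries below record as "MountVolume.SetUp succeeded".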
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "metrics-server-cert" not found Jan 30 06:57:04 crc kubenswrapper[4520]: I0130 06:57:04.404294 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" event={"ID":"4e1b6bdd-a23f-4023-861a-e28c2dd5e640","Type":"ContainerStarted","Data":"f4ceac03333b7784d093924ceb259de0a376101fd44eea32f5e0ba377e4eaece"} Jan 30 06:57:04 crc kubenswrapper[4520]: I0130 06:57:04.407197 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" event={"ID":"06701a52-2501-4045-b254-90b886c11b47","Type":"ContainerStarted","Data":"3098052d333b02fc84d36354c9c58d3812ea5dc63a426a325581840a86cf3ece"} Jan 30 06:57:04 crc kubenswrapper[4520]: E0130 06:57:04.427351 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" podUID="06701a52-2501-4045-b254-90b886c11b47" Jan 30 06:57:04 crc kubenswrapper[4520]: I0130 06:57:04.434632 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" event={"ID":"6b8a65a3-bc6c-473d-892f-4d80011c854f","Type":"ContainerStarted","Data":"b054923b3618d10e8814f0e70560bb4bd63c1f7216069c1c89b85e9580362aeb"} Jan 30 06:57:04 crc kubenswrapper[4520]: I0130 06:57:04.467815 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" event={"ID":"5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40","Type":"ContainerStarted","Data":"9a16c823168ea0b0cea9c116c6d959a6a3ef85ae8edcb764b16993d44fe20f5b"} Jan 30 06:57:04 crc kubenswrapper[4520]: E0130 06:57:04.473502 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" podUID="5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40" Jan 30 06:57:04 crc kubenswrapper[4520]: I0130 06:57:04.487801 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" event={"ID":"45285265-5fe0-4c19-a169-fe2598b27a5d","Type":"ContainerStarted","Data":"994fcd1ca16b087cd35460e433e32199e064d529e86714e65d8672bfda616889"} Jan 30 06:57:04 crc kubenswrapper[4520]: E0130 06:57:04.497209 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" podUID="45285265-5fe0-4c19-a169-fe2598b27a5d" Jan 30 06:57:04 crc kubenswrapper[4520]: I0130 06:57:04.513698 4520 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" event={"ID":"cd6316df-91d5-4c8c-84ac-d02c952a05c9","Type":"ContainerStarted","Data":"66e32d3919951eb63d0a986f368152c46d7e59ffed256761c2b1001ba745eecc"} Jan 30 06:57:04 crc kubenswrapper[4520]: E0130 06:57:04.521914 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" podUID="cd6316df-91d5-4c8c-84ac-d02c952a05c9" Jan 30 06:57:04 crc kubenswrapper[4520]: I0130 06:57:04.546103 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" event={"ID":"80a81bc2-ebfd-4fa9-80ed-ddb70fb32677","Type":"ContainerStarted","Data":"22bbace56971ecf034d464e65215c99fda1bd546e091c34b788e2b16e6dbe4c2"} Jan 30 06:57:04 crc kubenswrapper[4520]: E0130 06:57:04.553303 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" podUID="80a81bc2-ebfd-4fa9-80ed-ddb70fb32677" Jan 30 06:57:04 crc kubenswrapper[4520]: I0130 06:57:04.558214 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" event={"ID":"c2d1bf96-9105-4d5d-8dcd-174c098c76d9","Type":"ContainerStarted","Data":"d7fd19b9029ab9c3c8a8362677774f0fea4674006ded3ba62506d14d6a18abe3"} Jan 30 06:57:05 crc kubenswrapper[4520]: I0130 06:57:05.238970 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.239165 4520 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.239209 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert podName:cdf5fc79-647e-4d70-8785-682d7f27ce10 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:09.239195777 +0000 UTC m=+742.867547957 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert") pod "infra-operator-controller-manager-79955696d6-jfrp7" (UID: "cdf5fc79-647e-4d70-8785-682d7f27ce10") : secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:05 crc kubenswrapper[4520]: I0130 06:57:05.550703 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.550837 4520 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.550881 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert podName:3099544c-3b89-415c-aea6-f56b7581a803 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:09.550867424 +0000 UTC m=+743.179219596 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" (UID: "3099544c-3b89-415c-aea6-f56b7581a803") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.583771 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" podUID="5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40" Jan 30 06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.583837 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" podUID="45285265-5fe0-4c19-a169-fe2598b27a5d" Jan 30 06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.583871 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" podUID="80a81bc2-ebfd-4fa9-80ed-ddb70fb32677" Jan 30 06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.583902 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" podUID="cd6316df-91d5-4c8c-84ac-d02c952a05c9" Jan 30 
06:57:05 crc kubenswrapper[4520]: E0130 06:57:05.588473 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" podUID="06701a52-2501-4045-b254-90b886c11b47" Jan 30 06:57:06 crc kubenswrapper[4520]: I0130 06:57:06.064045 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:06 crc kubenswrapper[4520]: I0130 06:57:06.064138 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:06 crc kubenswrapper[4520]: E0130 06:57:06.064221 4520 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 06:57:06 crc kubenswrapper[4520]: E0130 06:57:06.064299 4520 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 06:57:06 crc kubenswrapper[4520]: E0130 06:57:06.064308 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:10.064290685 +0000 UTC m=+743.692642866 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "metrics-server-cert" not found Jan 30 06:57:06 crc kubenswrapper[4520]: E0130 06:57:06.064389 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:10.064373019 +0000 UTC m=+743.692725190 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "webhook-server-cert" not found Jan 30 06:57:09 crc kubenswrapper[4520]: I0130 06:57:09.329006 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:09 crc kubenswrapper[4520]: E0130 06:57:09.329253 4520 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:09 crc kubenswrapper[4520]: E0130 06:57:09.329301 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert podName:cdf5fc79-647e-4d70-8785-682d7f27ce10 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:17.329285869 +0000 UTC m=+750.957638050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert") pod "infra-operator-controller-manager-79955696d6-jfrp7" (UID: "cdf5fc79-647e-4d70-8785-682d7f27ce10") : secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:09 crc kubenswrapper[4520]: I0130 06:57:09.632732 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:09 crc kubenswrapper[4520]: E0130 06:57:09.632928 4520 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:09 crc kubenswrapper[4520]: E0130 06:57:09.633003 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert podName:3099544c-3b89-415c-aea6-f56b7581a803 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:17.632984539 +0000 UTC m=+751.261336720 (durationBeforeRetry 8s). 
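Note the durationBeforeRetry values across these mount retries: 2s at 06:57:03, 4s at 06:57:05, 8s here at 06:57:09, and 16s further down at 06:57:17. The kubelet's nestedpendingoperations doubles the delay after each failure. A toy reproduction of that schedule; the cap is an assumption, since this capture never runs long enough to show one:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 2 * time.Second
	ceiling := 2 * time.Minute // assumed upper bound on the retry interval
	for i := 0; i < 6; i++ {
		fmt.Println("durationBeforeRetry", delay)
		delay *= 2 // exponential backoff: each failed attempt doubles the wait
		if delay > ceiling {
			delay = ceiling
		}
	}
}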
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" (UID: "3099544c-3b89-415c-aea6-f56b7581a803") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 06:57:10 crc kubenswrapper[4520]: I0130 06:57:10.139567 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:10 crc kubenswrapper[4520]: E0130 06:57:10.139746 4520 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 06:57:10 crc kubenswrapper[4520]: I0130 06:57:10.140061 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:10 crc kubenswrapper[4520]: E0130 06:57:10.140076 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:18.140059395 +0000 UTC m=+751.768411576 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "webhook-server-cert" not found Jan 30 06:57:10 crc kubenswrapper[4520]: E0130 06:57:10.140208 4520 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 06:57:10 crc kubenswrapper[4520]: E0130 06:57:10.140256 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs podName:c2f02050-fdee-42d1-87c0-74104b2aa6bc nodeName:}" failed. No retries permitted until 2026-01-30 06:57:18.140246136 +0000 UTC m=+751.768598316 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-cxn6m" (UID: "c2f02050-fdee-42d1-87c0-74104b2aa6bc") : secret "metrics-server-cert" not found Jan 30 06:57:16 crc kubenswrapper[4520]: E0130 06:57:16.246930 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 30 06:57:16 crc kubenswrapper[4520]: E0130 06:57:16.248188 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67kg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-w4cwl_openstack-operators(ff428ec2-c3cf-413e-ac23-8fe55a37d261): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:16 crc kubenswrapper[4520]: E0130 06:57:16.250244 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
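The "rpc error: code = Canceled desc = copying config: context canceled" pulls are a different failure mode from the QPS rejections above: here a pull actually started, and its context was torn down mid-copy, so the runtime returns gRPC Canceled and the kubelet records ErrImagePull followed by ImagePullBackOff. How a cancellation propagates out of a copy step, as a self-contained sketch; copyConfig is a stand-in for illustration, not CRI-O code:

package main

import (
	"context"
	"fmt"
	"time"
)

// copyConfig stands in for the registry "copying config" step named in the
// log; it only observes cancellation, it does not talk to a registry.
func copyConfig(ctx context.Context) error {
	select {
	case <-time.After(5 * time.Second): // pretend the blob download takes 5s
		return nil
	case <-ctx.Done():
		// ctx.Err() is context.Canceled, which surfaces in the wrapped error.
		return fmt.Errorf("copying config: %w", ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(100 * time.Millisecond)
		cancel() // the caller abandons the pull mid-flight
	}()
	fmt.Println(copyConfig(ctx)) // prints: copying config: context canceled
}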
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl" podUID="ff428ec2-c3cf-413e-ac23-8fe55a37d261" Jan 30 06:57:16 crc kubenswrapper[4520]: E0130 06:57:16.660920 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl" podUID="ff428ec2-c3cf-413e-ac23-8fe55a37d261" Jan 30 06:57:16 crc kubenswrapper[4520]: E0130 06:57:16.866915 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c" Jan 30 06:57:16 crc kubenswrapper[4520]: E0130 06:57:16.867098 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljr7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b6c4d8c5f-df2r7_openstack-operators(d34fd2f5-b868-4eb8-9708-48b5e31e1397): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:16 crc 
kubenswrapper[4520]: E0130 06:57:16.868963 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" podUID="d34fd2f5-b868-4eb8-9708-48b5e31e1397" Jan 30 06:57:17 crc kubenswrapper[4520]: I0130 06:57:17.352875 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:17 crc kubenswrapper[4520]: E0130 06:57:17.353249 4520 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:17 crc kubenswrapper[4520]: E0130 06:57:17.353377 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert podName:cdf5fc79-647e-4d70-8785-682d7f27ce10 nodeName:}" failed. No retries permitted until 2026-01-30 06:57:33.353334525 +0000 UTC m=+766.981686706 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert") pod "infra-operator-controller-manager-79955696d6-jfrp7" (UID: "cdf5fc79-647e-4d70-8785-682d7f27ce10") : secret "infra-operator-webhook-server-cert" not found Jan 30 06:57:17 crc kubenswrapper[4520]: I0130 06:57:17.655582 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:17 crc kubenswrapper[4520]: I0130 06:57:17.663502 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3099544c-3b89-415c-aea6-f56b7581a803-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq\" (UID: \"3099544c-3b89-415c-aea6-f56b7581a803\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:17 crc kubenswrapper[4520]: E0130 06:57:17.672042 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" podUID="d34fd2f5-b868-4eb8-9708-48b5e31e1397" Jan 30 06:57:17 crc kubenswrapper[4520]: I0130 06:57:17.870164 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:18 crc kubenswrapper[4520]: E0130 06:57:18.032780 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 30 06:57:18 crc kubenswrapper[4520]: E0130 06:57:18.033089 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qrsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-l28fb_openstack-operators(c2d1bf96-9105-4d5d-8dcd-174c098c76d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:18 crc kubenswrapper[4520]: E0130 06:57:18.034317 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" podUID="c2d1bf96-9105-4d5d-8dcd-174c098c76d9" Jan 30 06:57:18 crc kubenswrapper[4520]: I0130 06:57:18.163402 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:18 crc kubenswrapper[4520]: I0130 06:57:18.165878 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:18 crc kubenswrapper[4520]: I0130 06:57:18.171861 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:18 crc kubenswrapper[4520]: I0130 06:57:18.175049 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2f02050-fdee-42d1-87c0-74104b2aa6bc-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-cxn6m\" (UID: \"c2f02050-fdee-42d1-87c0-74104b2aa6bc\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:18 crc kubenswrapper[4520]: I0130 06:57:18.436815 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:18 crc kubenswrapper[4520]: E0130 06:57:18.678891 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" podUID="c2d1bf96-9105-4d5d-8dcd-174c098c76d9" Jan 30 06:57:18 crc kubenswrapper[4520]: E0130 06:57:18.686326 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 30 06:57:18 crc kubenswrapper[4520]: E0130 06:57:18.686461 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wx487,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-6hlhp_openstack-operators(fbf504d7-8829-43eb-983a-e7be0f5929ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:18 crc kubenswrapper[4520]: E0130 06:57:18.688628 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" podUID="fbf504d7-8829-43eb-983a-e7be0f5929ac" Jan 30 06:57:19 crc kubenswrapper[4520]: E0130 06:57:19.692833 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" podUID="fbf504d7-8829-43eb-983a-e7be0f5929ac" Jan 30 06:57:20 crc kubenswrapper[4520]: E0130 06:57:20.859189 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 06:57:20 crc kubenswrapper[4520]: E0130 06:57:20.859342 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnm8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zr9jd_openstack-operators(6b8a65a3-bc6c-473d-892f-4d80011c854f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:20 crc kubenswrapper[4520]: E0130 06:57:20.860532 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" podUID="6b8a65a3-bc6c-473d-892f-4d80011c854f" Jan 30 06:57:21 crc kubenswrapper[4520]: E0130 06:57:21.704640 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" podUID="6b8a65a3-bc6c-473d-892f-4d80011c854f" Jan 30 06:57:22 crc kubenswrapper[4520]: E0130 06:57:22.184205 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 30 06:57:22 crc kubenswrapper[4520]: E0130 06:57:22.184531 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kgt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-pmrw9_openstack-operators(4e1b6bdd-a23f-4023-861a-e28c2dd5e640): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:22 crc kubenswrapper[4520]: E0130 06:57:22.185759 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" podUID="4e1b6bdd-a23f-4023-861a-e28c2dd5e640" Jan 30 06:57:22 crc kubenswrapper[4520]: E0130 06:57:22.703434 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 30 06:57:22 crc kubenswrapper[4520]: E0130 06:57:22.703651 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4b5b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-52h27_openstack-operators(cd83993b-94e2-438b-9f19-8179f70b4a0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:22 crc kubenswrapper[4520]: E0130 06:57:22.704848 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" podUID="cd83993b-94e2-438b-9f19-8179f70b4a0e" Jan 30 06:57:22 crc kubenswrapper[4520]: E0130 06:57:22.708369 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" podUID="4e1b6bdd-a23f-4023-861a-e28c2dd5e640" Jan 30 06:57:23 crc kubenswrapper[4520]: E0130 06:57:23.263933 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 30 06:57:23 crc kubenswrapper[4520]: E0130 06:57:23.264101 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7mww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-9sjbr_openstack-operators(255320d6-1503-4351-ad06-7794cbbdd120): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:23 crc kubenswrapper[4520]: E0130 06:57:23.265255 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" podUID="255320d6-1503-4351-ad06-7794cbbdd120" Jan 30 06:57:23 crc kubenswrapper[4520]: E0130 06:57:23.718755 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" podUID="cd83993b-94e2-438b-9f19-8179f70b4a0e" Jan 30 06:57:23 crc kubenswrapper[4520]: E0130 06:57:23.719051 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" podUID="255320d6-1503-4351-ad06-7794cbbdd120" Jan 30 06:57:23 crc kubenswrapper[4520]: E0130 06:57:23.805384 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 30 06:57:23 crc kubenswrapper[4520]: E0130 06:57:23.805636 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7t7x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-cnhtx_openstack-operators(b7d9e5dd-5b3b-4aaa-834c-74029a7de138): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:57:23 crc kubenswrapper[4520]: E0130 06:57:23.806922 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx" 
podUID="b7d9e5dd-5b3b-4aaa-834c-74029a7de138" Jan 30 06:57:24 crc kubenswrapper[4520]: E0130 06:57:24.722207 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx" podUID="b7d9e5dd-5b3b-4aaa-834c-74029a7de138" Jan 30 06:57:28 crc kubenswrapper[4520]: I0130 06:57:28.337627 4520 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.422951 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m"] Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.489354 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq"] Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.763101 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" event={"ID":"d34fd2f5-b868-4eb8-9708-48b5e31e1397","Type":"ContainerStarted","Data":"9a04cdd40eab6e550f8d8c0dc721b87f1de97b5d237403482a4d2a9891b78fc5"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.764437 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.765552 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" event={"ID":"45285265-5fe0-4c19-a169-fe2598b27a5d","Type":"ContainerStarted","Data":"9a5e361748817c8d59359e689943e8c268dd519f87913f6d78f724b49432cc1f"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.765951 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.767371 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89" event={"ID":"7ac2569b-0787-4f14-9039-a7541c6123e6","Type":"ContainerStarted","Data":"8946f9f06a2b44426cfb0649699173a25ebbae1f23d484ed4543d17f4e03ddc9"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.767777 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.768909 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj" event={"ID":"27734711-fc9e-4ddf-acc0-47761e072c20","Type":"ContainerStarted","Data":"9c54c603fe0860c229d1f20a7f69df5b6fef12c829d432d05be0e8a92cc38b5f"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.769280 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.770371 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2" event={"ID":"8da917f8-3f81-4867-9a6f-ac261284771c","Type":"ContainerStarted","Data":"baca17cec8177f2205ebb6493e8d937c5165b7ed5b65383619328e4a0799e9cc"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.770740 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.771453 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" event={"ID":"3099544c-3b89-415c-aea6-f56b7581a803","Type":"ContainerStarted","Data":"9a84079fc62c4918a71c2f9d6beaed4a901fc15dcc32fa8a764b50ee14a3fd7c"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.772597 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" event={"ID":"cd6316df-91d5-4c8c-84ac-d02c952a05c9","Type":"ContainerStarted","Data":"9bf7283da17e7b54da99b7df089a17a7a4e5e29b687fccbf26b3891f6256cdf9"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.772923 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.773572 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" event={"ID":"80a81bc2-ebfd-4fa9-80ed-ddb70fb32677","Type":"ContainerStarted","Data":"d960be6d59e728f9f4cbeda21cf7f5eb249a5bd6a1baadc43abe13a9426c290c"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.773727 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.774575 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" event={"ID":"6bb8d69e-cfd3-4d0f-9c93-53716539e927","Type":"ContainerStarted","Data":"b9fdcde0e39921c48d9c6a0d235c290760ae0b4a2d94a6bacfd4878a3fcc0ead"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.774697 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.775569 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987" event={"ID":"9ab80be7-f6c7-420b-996c-3a373886483f","Type":"ContainerStarted","Data":"873d736f81e06117cca79530c5cdba6ac3cacea08de257ec72e2c85961f90ee7"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.775705 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.776734 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf" event={"ID":"dda4dad2-f4d8-494e-9c59-28413625eb1d","Type":"ContainerStarted","Data":"41379943db021e0057e700d18458b9565ad7e72fb175234eb2245851bdedf1f8"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.777066 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.778019 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" event={"ID":"06701a52-2501-4045-b254-90b886c11b47","Type":"ContainerStarted","Data":"e60a65f18efe65406620d741bf43c6e6b7f646778c9d281de08ddee9c3deb33c"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.778373 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.779406 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" event={"ID":"5c259a8d-a9cf-46c1-84b3-dbf5e2fb6e40","Type":"ContainerStarted","Data":"aaa9108d0fe7e99300f18ca89392dc5037e0c1058a7ff84d875e3006f232e292"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.781274 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.782004 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" event={"ID":"c2f02050-fdee-42d1-87c0-74104b2aa6bc","Type":"ContainerStarted","Data":"5fb4afc03f93bd075d1c48bd89bec23280f109cccc79dfe8588e66190746c1b1"} Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.829909 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" podStartSLOduration=2.051088894 podStartE2EDuration="28.829893564s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.512284701 +0000 UTC m=+736.140636882" lastFinishedPulling="2026-01-30 06:57:29.29108937 +0000 UTC m=+762.919441552" observedRunningTime="2026-01-30 06:57:29.825023077 +0000 UTC m=+763.453375259" watchObservedRunningTime="2026-01-30 06:57:29.829893564 +0000 UTC m=+763.458245744" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.925441 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2" podStartSLOduration=7.708958852 podStartE2EDuration="28.925419933s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.575591822 +0000 UTC m=+736.203944003" lastFinishedPulling="2026-01-30 06:57:23.792052903 +0000 UTC m=+757.420405084" observedRunningTime="2026-01-30 06:57:29.861876809 +0000 UTC m=+763.490228990" watchObservedRunningTime="2026-01-30 06:57:29.925419933 +0000 UTC m=+763.553772114" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.945204 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987" podStartSLOduration=7.752919324 podStartE2EDuration="28.945189708s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.599016428 +0000 UTC m=+736.227368609" lastFinishedPulling="2026-01-30 06:57:23.791286812 +0000 UTC m=+757.419638993" observedRunningTime="2026-01-30 06:57:29.88706526 +0000 UTC m=+763.515417442" watchObservedRunningTime="2026-01-30 06:57:29.945189708 +0000 UTC m=+763.573541888" Jan 30 06:57:29 crc kubenswrapper[4520]: 
I0130 06:57:29.959085 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" podStartSLOduration=3.506841701 podStartE2EDuration="28.959072736s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.511899879 +0000 UTC m=+737.140252060" lastFinishedPulling="2026-01-30 06:57:28.964130914 +0000 UTC m=+762.592483095" observedRunningTime="2026-01-30 06:57:29.938252086 +0000 UTC m=+763.566604267" watchObservedRunningTime="2026-01-30 06:57:29.959072736 +0000 UTC m=+763.587424918" Jan 30 06:57:29 crc kubenswrapper[4520]: I0130 06:57:29.985849 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf" podStartSLOduration=7.707124543 podStartE2EDuration="28.985823915s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.512259223 +0000 UTC m=+736.140611404" lastFinishedPulling="2026-01-30 06:57:23.790958595 +0000 UTC m=+757.419310776" observedRunningTime="2026-01-30 06:57:29.983046614 +0000 UTC m=+763.611398786" watchObservedRunningTime="2026-01-30 06:57:29.985823915 +0000 UTC m=+763.614176096" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.038162 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" podStartSLOduration=3.486509948 podStartE2EDuration="29.038143991s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.482278614 +0000 UTC m=+737.110630784" lastFinishedPulling="2026-01-30 06:57:29.033912647 +0000 UTC m=+762.662264827" observedRunningTime="2026-01-30 06:57:30.035102283 +0000 UTC m=+763.663454465" watchObservedRunningTime="2026-01-30 06:57:30.038143991 +0000 UTC m=+763.666496172" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.059184 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89" podStartSLOduration=7.900497886 podStartE2EDuration="29.059168736s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.632384186 +0000 UTC m=+736.260736367" lastFinishedPulling="2026-01-30 06:57:23.791055036 +0000 UTC m=+757.419407217" observedRunningTime="2026-01-30 06:57:30.054711607 +0000 UTC m=+763.683063788" watchObservedRunningTime="2026-01-30 06:57:30.059168736 +0000 UTC m=+763.687520916" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.071720 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" podStartSLOduration=3.673129445 podStartE2EDuration="29.071713279s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.559893482 +0000 UTC m=+737.188245663" lastFinishedPulling="2026-01-30 06:57:28.958477316 +0000 UTC m=+762.586829497" observedRunningTime="2026-01-30 06:57:30.069228207 +0000 UTC m=+763.697580378" watchObservedRunningTime="2026-01-30 06:57:30.071713279 +0000 UTC m=+763.700065459" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.089425 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj" podStartSLOduration=4.310484241 podStartE2EDuration="29.089407511s" podCreationTimestamp="2026-01-30 06:57:01 
+0000 UTC" firstStartedPulling="2026-01-30 06:57:02.998440671 +0000 UTC m=+736.626792843" lastFinishedPulling="2026-01-30 06:57:27.777363922 +0000 UTC m=+761.405716113" observedRunningTime="2026-01-30 06:57:30.087776295 +0000 UTC m=+763.716128477" watchObservedRunningTime="2026-01-30 06:57:30.089407511 +0000 UTC m=+763.717759682" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.109487 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" podStartSLOduration=3.705136784 podStartE2EDuration="29.109479796s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.559697022 +0000 UTC m=+737.188049203" lastFinishedPulling="2026-01-30 06:57:28.964040033 +0000 UTC m=+762.592392215" observedRunningTime="2026-01-30 06:57:30.106743341 +0000 UTC m=+763.735095523" watchObservedRunningTime="2026-01-30 06:57:30.109479796 +0000 UTC m=+763.737831976" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.127978 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" podStartSLOduration=8.393642692 podStartE2EDuration="29.127971407s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.057711825 +0000 UTC m=+736.686064005" lastFinishedPulling="2026-01-30 06:57:23.79204054 +0000 UTC m=+757.420392720" observedRunningTime="2026-01-30 06:57:30.126597284 +0000 UTC m=+763.754949465" watchObservedRunningTime="2026-01-30 06:57:30.127971407 +0000 UTC m=+763.756323578" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.160954 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" podStartSLOduration=3.622077402 podStartE2EDuration="29.160933072s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.448619178 +0000 UTC m=+737.076971359" lastFinishedPulling="2026-01-30 06:57:28.987474858 +0000 UTC m=+762.615827029" observedRunningTime="2026-01-30 06:57:30.159437951 +0000 UTC m=+763.787790131" watchObservedRunningTime="2026-01-30 06:57:30.160933072 +0000 UTC m=+763.789285243" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.793716 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" event={"ID":"c2d1bf96-9105-4d5d-8dcd-174c098c76d9","Type":"ContainerStarted","Data":"d9ca7980747d709d766524254c63e3973c7313748072768e9ed76b6267105f56"} Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.794702 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.798444 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" event={"ID":"c2f02050-fdee-42d1-87c0-74104b2aa6bc","Type":"ContainerStarted","Data":"cc34544141d3f1a74259f8017af7d677d56a12638bcaca648d184d5a349d5ea1"} Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.798874 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.801836 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl" event={"ID":"ff428ec2-c3cf-413e-ac23-8fe55a37d261","Type":"ContainerStarted","Data":"3235f40a052708e31a1ef1fac4dac0216e85a9e5f3470b05b3ceea63b49de712"} Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.802177 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.835723 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" podStartSLOduration=2.904601596 podStartE2EDuration="29.835693237s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.380449727 +0000 UTC m=+737.008801908" lastFinishedPulling="2026-01-30 06:57:30.311541369 +0000 UTC m=+763.939893549" observedRunningTime="2026-01-30 06:57:30.834253661 +0000 UTC m=+764.462605842" watchObservedRunningTime="2026-01-30 06:57:30.835693237 +0000 UTC m=+764.464045418" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.882397 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" podStartSLOduration=29.882368672 podStartE2EDuration="29.882368672s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:57:30.879108763 +0000 UTC m=+764.507460944" watchObservedRunningTime="2026-01-30 06:57:30.882368672 +0000 UTC m=+764.510720853" Jan 30 06:57:30 crc kubenswrapper[4520]: I0130 06:57:30.910229 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl" podStartSLOduration=2.109877612 podStartE2EDuration="29.910215782s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.511912402 +0000 UTC m=+736.140264583" lastFinishedPulling="2026-01-30 06:57:30.312250572 +0000 UTC m=+763.940602753" observedRunningTime="2026-01-30 06:57:30.904734256 +0000 UTC m=+764.533086437" watchObservedRunningTime="2026-01-30 06:57:30.910215782 +0000 UTC m=+764.538567963" Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.423879 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.429731 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdf5fc79-647e-4d70-8785-682d7f27ce10-cert\") pod \"infra-operator-controller-manager-79955696d6-jfrp7\" (UID: \"cdf5fc79-647e-4d70-8785-682d7f27ce10\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.510592 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.825298 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" event={"ID":"3099544c-3b89-415c-aea6-f56b7581a803","Type":"ContainerStarted","Data":"1a5427a1e852178d25481da4dcc852e777497e9ce9e2cf5493e210c22dd698c5"} Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.825448 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.827593 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" event={"ID":"6b8a65a3-bc6c-473d-892f-4d80011c854f","Type":"ContainerStarted","Data":"c1897ce707deb2712b68d7d141f26d935e1d41249ec66a0d78851a62f3ded437"} Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.870812 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" podStartSLOduration=29.601980272 podStartE2EDuration="32.870789297s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:29.512635022 +0000 UTC m=+763.140987203" lastFinishedPulling="2026-01-30 06:57:32.781444047 +0000 UTC m=+766.409796228" observedRunningTime="2026-01-30 06:57:33.850998542 +0000 UTC m=+767.479350733" watchObservedRunningTime="2026-01-30 06:57:33.870789297 +0000 UTC m=+767.499141468" Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.872815 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zr9jd" podStartSLOduration=2.1966507650000002 podStartE2EDuration="31.872804765s" podCreationTimestamp="2026-01-30 06:57:02 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.557478421 +0000 UTC m=+737.185830592" lastFinishedPulling="2026-01-30 06:57:33.233632421 +0000 UTC m=+766.861984592" observedRunningTime="2026-01-30 06:57:33.86606746 +0000 UTC m=+767.494419641" watchObservedRunningTime="2026-01-30 06:57:33.872804765 +0000 UTC m=+767.501156936" Jan 30 06:57:33 crc kubenswrapper[4520]: I0130 06:57:33.920557 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7"] Jan 30 06:57:34 crc kubenswrapper[4520]: I0130 06:57:34.840741 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" event={"ID":"fbf504d7-8829-43eb-983a-e7be0f5929ac","Type":"ContainerStarted","Data":"e8b35c8f12340ae5305207c091add52e36ad2f8c9ff548f57ee1c1e58fec0e19"} Jan 30 06:57:34 crc kubenswrapper[4520]: I0130 06:57:34.841393 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" Jan 30 06:57:34 crc kubenswrapper[4520]: I0130 06:57:34.848244 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" event={"ID":"cdf5fc79-647e-4d70-8785-682d7f27ce10","Type":"ContainerStarted","Data":"6516e23d35ec09eea08b066f5d1bb44c7d6f590b377f5490d15838a9b3ed5f94"} Jan 30 06:57:34 crc kubenswrapper[4520]: I0130 06:57:34.865288 4520 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" podStartSLOduration=2.239335758 podStartE2EDuration="33.865259241s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.632615491 +0000 UTC m=+736.260967671" lastFinishedPulling="2026-01-30 06:57:34.258538974 +0000 UTC m=+767.886891154" observedRunningTime="2026-01-30 06:57:34.864775433 +0000 UTC m=+768.493127613" watchObservedRunningTime="2026-01-30 06:57:34.865259241 +0000 UTC m=+768.493611423" Jan 30 06:57:36 crc kubenswrapper[4520]: I0130 06:57:36.867421 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" event={"ID":"cdf5fc79-647e-4d70-8785-682d7f27ce10","Type":"ContainerStarted","Data":"776ade6d0ff4bb762adb8f1a9e06d06bdae3f0a910c40b5b8c59b9e5e78b6c5d"} Jan 30 06:57:36 crc kubenswrapper[4520]: I0130 06:57:36.867893 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:37 crc kubenswrapper[4520]: I0130 06:57:37.706680 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" podStartSLOduration=34.552355727 podStartE2EDuration="36.706642862s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:33.929165829 +0000 UTC m=+767.557517999" lastFinishedPulling="2026-01-30 06:57:36.083452953 +0000 UTC m=+769.711805134" observedRunningTime="2026-01-30 06:57:36.89117418 +0000 UTC m=+770.519526361" watchObservedRunningTime="2026-01-30 06:57:37.706642862 +0000 UTC m=+771.334995043" Jan 30 06:57:37 crc kubenswrapper[4520]: I0130 06:57:37.877423 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" Jan 30 06:57:37 crc kubenswrapper[4520]: I0130 06:57:37.877476 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" event={"ID":"4e1b6bdd-a23f-4023-861a-e28c2dd5e640","Type":"ContainerStarted","Data":"4c205cb50f46ee73ed01d898bae15eb02d729888075291c419f7be5ca1abef62"} Jan 30 06:57:37 crc kubenswrapper[4520]: I0130 06:57:37.934452 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" podStartSLOduration=3.191103016 podStartE2EDuration="36.934435768s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.446424311 +0000 UTC m=+737.074776492" lastFinishedPulling="2026-01-30 06:57:37.189757064 +0000 UTC m=+770.818109244" observedRunningTime="2026-01-30 06:57:37.929350418 +0000 UTC m=+771.557702599" watchObservedRunningTime="2026-01-30 06:57:37.934435768 +0000 UTC m=+771.562787939" Jan 30 06:57:38 crc kubenswrapper[4520]: I0130 06:57:38.451861 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 06:57:38 crc kubenswrapper[4520]: I0130 06:57:38.884693 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" 
event={"ID":"cd83993b-94e2-438b-9f19-8179f70b4a0e","Type":"ContainerStarted","Data":"47e56c44db44b89c2df4d554b5da1f02d1c6b12749aa8a5fe341d1d9e6ac04b4"} Jan 30 06:57:38 crc kubenswrapper[4520]: I0130 06:57:38.884915 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" Jan 30 06:57:38 crc kubenswrapper[4520]: I0130 06:57:38.886933 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" event={"ID":"255320d6-1503-4351-ad06-7794cbbdd120","Type":"ContainerStarted","Data":"f51d9ef34ceb914fcde7870aa16863b394c12224e4de6115bba029bf05221aca"} Jan 30 06:57:38 crc kubenswrapper[4520]: I0130 06:57:38.887168 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" Jan 30 06:57:38 crc kubenswrapper[4520]: I0130 06:57:38.902823 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" podStartSLOduration=2.767591211 podStartE2EDuration="37.902803985s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:03.045701888 +0000 UTC m=+736.674054068" lastFinishedPulling="2026-01-30 06:57:38.180914661 +0000 UTC m=+771.809266842" observedRunningTime="2026-01-30 06:57:38.899827099 +0000 UTC m=+772.528179280" watchObservedRunningTime="2026-01-30 06:57:38.902803985 +0000 UTC m=+772.531156166" Jan 30 06:57:38 crc kubenswrapper[4520]: I0130 06:57:38.913982 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" podStartSLOduration=2.655407247 podStartE2EDuration="37.913967081s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.96269328 +0000 UTC m=+736.591045451" lastFinishedPulling="2026-01-30 06:57:38.221253103 +0000 UTC m=+771.849605285" observedRunningTime="2026-01-30 06:57:38.912834811 +0000 UTC m=+772.541186992" watchObservedRunningTime="2026-01-30 06:57:38.913967081 +0000 UTC m=+772.542319262" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.527943 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.539190 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.564889 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-w4cwl" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.596411 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.643378 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.734616 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kf987" Jan 30 06:57:41 crc 
kubenswrapper[4520]: I0130 06:57:41.757626 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gfpw2" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.889804 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.908376 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx" event={"ID":"b7d9e5dd-5b3b-4aaa-834c-74029a7de138","Type":"ContainerStarted","Data":"2a3e180b3695c4ff43ff499f5a6b05b0681cdd95b2134448b5e93082c0d5bd51"} Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.908616 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.908979 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.941164 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx" podStartSLOduration=2.679446697 podStartE2EDuration="40.941146206s" podCreationTimestamp="2026-01-30 06:57:01 +0000 UTC" firstStartedPulling="2026-01-30 06:57:02.962628929 +0000 UTC m=+736.590981110" lastFinishedPulling="2026-01-30 06:57:41.224328437 +0000 UTC m=+774.852680619" observedRunningTime="2026-01-30 06:57:41.938175051 +0000 UTC m=+775.566527232" watchObservedRunningTime="2026-01-30 06:57:41.941146206 +0000 UTC m=+775.569498388" Jan 30 06:57:41 crc kubenswrapper[4520]: I0130 06:57:41.960967 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kh9zj" Jan 30 06:57:42 crc kubenswrapper[4520]: I0130 06:57:42.026619 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" Jan 30 06:57:42 crc kubenswrapper[4520]: I0130 06:57:42.027415 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-j2t49" Jan 30 06:57:42 crc kubenswrapper[4520]: I0130 06:57:42.041479 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" Jan 30 06:57:42 crc kubenswrapper[4520]: I0130 06:57:42.059957 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" Jan 30 06:57:42 crc kubenswrapper[4520]: I0130 06:57:42.064997 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-pmrw9" Jan 30 06:57:42 crc kubenswrapper[4520]: I0130 06:57:42.129541 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2ppq4" Jan 30 06:57:42 crc kubenswrapper[4520]: I0130 06:57:42.481996 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" Jan 30 06:57:43 crc kubenswrapper[4520]: I0130 06:57:43.515470 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" Jan 30 06:57:51 crc kubenswrapper[4520]: I0130 06:57:51.789375 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cnhtx" Jan 30 06:57:51 crc kubenswrapper[4520]: I0130 06:57:51.861159 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" Jan 30 06:57:51 crc kubenswrapper[4520]: I0130 06:57:51.924859 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" Jan 30 06:57:57 crc kubenswrapper[4520]: I0130 06:57:57.793656 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:57:57 crc kubenswrapper[4520]: I0130 06:57:57.794276 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.249877 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d969c4c77-6zmdl"] Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.260856 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.268227 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.268480 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-42bhb" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.268688 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.268902 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.297336 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d969c4c77-6zmdl"] Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.321632 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8d48b55b9-47phj"] Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.322910 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.326442 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8d48b55b9-47phj"] Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.326833 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.396598 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-dns-svc\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.396673 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmp9s\" (UniqueName: \"kubernetes.io/projected/22de3ada-f978-40a4-a074-4a9d8730ce60-kube-api-access-mmp9s\") pod \"dnsmasq-dns-d969c4c77-6zmdl\" (UID: \"22de3ada-f978-40a4-a074-4a9d8730ce60\") " pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.396732 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-config\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.396809 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22de3ada-f978-40a4-a074-4a9d8730ce60-config\") pod \"dnsmasq-dns-d969c4c77-6zmdl\" (UID: \"22de3ada-f978-40a4-a074-4a9d8730ce60\") " pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.396842 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxg5\" (UniqueName: \"kubernetes.io/projected/abb25131-80ba-42be-8e62-607dc6e04636-kube-api-access-dfxg5\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.498261 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-config\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.498344 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22de3ada-f978-40a4-a074-4a9d8730ce60-config\") pod \"dnsmasq-dns-d969c4c77-6zmdl\" (UID: \"22de3ada-f978-40a4-a074-4a9d8730ce60\") " pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.498374 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfxg5\" (UniqueName: \"kubernetes.io/projected/abb25131-80ba-42be-8e62-607dc6e04636-kube-api-access-dfxg5\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 
06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.498452 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-dns-svc\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.498493 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmp9s\" (UniqueName: \"kubernetes.io/projected/22de3ada-f978-40a4-a074-4a9d8730ce60-kube-api-access-mmp9s\") pod \"dnsmasq-dns-d969c4c77-6zmdl\" (UID: \"22de3ada-f978-40a4-a074-4a9d8730ce60\") " pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.499489 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22de3ada-f978-40a4-a074-4a9d8730ce60-config\") pod \"dnsmasq-dns-d969c4c77-6zmdl\" (UID: \"22de3ada-f978-40a4-a074-4a9d8730ce60\") " pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.499509 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-dns-svc\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.500152 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-config\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.518219 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmp9s\" (UniqueName: \"kubernetes.io/projected/22de3ada-f978-40a4-a074-4a9d8730ce60-kube-api-access-mmp9s\") pod \"dnsmasq-dns-d969c4c77-6zmdl\" (UID: \"22de3ada-f978-40a4-a074-4a9d8730ce60\") " pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.519585 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfxg5\" (UniqueName: \"kubernetes.io/projected/abb25131-80ba-42be-8e62-607dc6e04636-kube-api-access-dfxg5\") pod \"dnsmasq-dns-8d48b55b9-47phj\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.583813 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:05 crc kubenswrapper[4520]: I0130 06:58:05.647831 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:06 crc kubenswrapper[4520]: I0130 06:58:06.016423 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d969c4c77-6zmdl"] Jan 30 06:58:06 crc kubenswrapper[4520]: I0130 06:58:06.025124 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:58:06 crc kubenswrapper[4520]: I0130 06:58:06.091590 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8d48b55b9-47phj"] Jan 30 06:58:06 crc kubenswrapper[4520]: I0130 06:58:06.103226 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d48b55b9-47phj" event={"ID":"abb25131-80ba-42be-8e62-607dc6e04636","Type":"ContainerStarted","Data":"17354343caccc7dbb96fa38aa2aa6395df6c71b8315465fb8de11eda481cb946"} Jan 30 06:58:06 crc kubenswrapper[4520]: I0130 06:58:06.104453 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" event={"ID":"22de3ada-f978-40a4-a074-4a9d8730ce60","Type":"ContainerStarted","Data":"444285496adce14cab5130b185febb5a2b83ef1011622ea4210313aa7c499834"} Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.190939 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8d48b55b9-47phj"] Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.210145 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699648c449-ndkfx"] Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.211284 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.226823 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699648c449-ndkfx"] Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.342483 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-dns-svc\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.342589 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjjgx\" (UniqueName: \"kubernetes.io/projected/947f0d01-6b92-4f3d-bb96-a1edf03651f1-kube-api-access-kjjgx\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.342614 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-config\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.443860 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-dns-svc\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.443925 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-config\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.443943 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjjgx\" (UniqueName: \"kubernetes.io/projected/947f0d01-6b92-4f3d-bb96-a1edf03651f1-kube-api-access-kjjgx\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.446297 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-dns-svc\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.446364 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-config\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.498317 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjjgx\" (UniqueName: \"kubernetes.io/projected/947f0d01-6b92-4f3d-bb96-a1edf03651f1-kube-api-access-kjjgx\") pod \"dnsmasq-dns-699648c449-ndkfx\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.504421 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d969c4c77-6zmdl"] Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.541262 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f744cb77-wjhmz"] Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.542278 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.561565 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.570465 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f744cb77-wjhmz"] Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.657791 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-config\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.658179 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-dns-svc\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.662899 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-kube-api-access-qdtt4\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.764911 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-config\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.765081 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-dns-svc\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.765103 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-kube-api-access-qdtt4\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.766136 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-config\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.766653 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-dns-svc\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.788272 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtt4\" (UniqueName: 
\"kubernetes.io/projected/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-kube-api-access-qdtt4\") pod \"dnsmasq-dns-7f744cb77-wjhmz\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:08 crc kubenswrapper[4520]: I0130 06:58:08.871736 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.140211 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699648c449-ndkfx"] Jan 30 06:58:09 crc kubenswrapper[4520]: W0130 06:58:09.142914 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947f0d01_6b92_4f3d_bb96_a1edf03651f1.slice/crio-a3e169a742f513f0651ae999a838f64d8941bd3f02f950124cb766b441123831 WatchSource:0}: Error finding container a3e169a742f513f0651ae999a838f64d8941bd3f02f950124cb766b441123831: Status 404 returned error can't find the container with id a3e169a742f513f0651ae999a838f64d8941bd3f02f950124cb766b441123831 Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.328458 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f744cb77-wjhmz"] Jan 30 06:58:09 crc kubenswrapper[4520]: W0130 06:58:09.338661 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8afe178_46ca_433c_8ce0_b0ab1fb61ffb.slice/crio-54e89ed464c549de060313b46986aff91d4aecfe06e379e240ca22620f24aea1 WatchSource:0}: Error finding container 54e89ed464c549de060313b46986aff91d4aecfe06e379e240ca22620f24aea1: Status 404 returned error can't find the container with id 54e89ed464c549de060313b46986aff91d4aecfe06e379e240ca22620f24aea1 Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.367289 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.369053 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.373829 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.374048 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.374894 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ndz6j" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.375012 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.375121 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.376977 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.378963 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.383383 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.390084 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.390122 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.390162 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.390195 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.390213 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.390891 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbhk\" (UniqueName: 
\"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-kube-api-access-7lbhk\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.390973 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.391003 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.391020 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.391039 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.391068 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500129 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lbhk\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-kube-api-access-7lbhk\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500220 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500245 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500270 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500310 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500355 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500435 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500466 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500507 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500598 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.500624 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.501850 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.503556 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.504929 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.504161 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.504195 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-config-data\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.505364 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.507330 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.507797 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.508255 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.528307 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lbhk\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-kube-api-access-7lbhk\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.542210 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.550321 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.693510 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.697943 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.699316 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.703504 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.704210 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.704350 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.704300 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6kzwj" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.704599 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.704877 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.709228 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.714193 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812038 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812128 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjzl\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-kube-api-access-7sjzl\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812190 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812244 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812271 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812303 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc4abc0f-2827-4636-9942-342593697905-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812352 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc4abc0f-2827-4636-9942-342593697905-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812381 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812398 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812422 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.812480 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913524 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913751 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913778 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913809 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc4abc0f-2827-4636-9942-342593697905-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913858 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc4abc0f-2827-4636-9942-342593697905-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913887 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913900 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913921 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.913989 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.914023 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.914082 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjzl\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-kube-api-access-7sjzl\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.914832 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.915401 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.915708 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.922757 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc4abc0f-2827-4636-9942-342593697905-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.923650 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.923952 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.924426 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.924955 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.925679 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.926195 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc4abc0f-2827-4636-9942-342593697905-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.938123 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7sjzl\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-kube-api-access-7sjzl\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:09 crc kubenswrapper[4520]: I0130 06:58:09.942925 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.028245 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.218226 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" event={"ID":"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb","Type":"ContainerStarted","Data":"54e89ed464c549de060313b46986aff91d4aecfe06e379e240ca22620f24aea1"} Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.219437 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.230452 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699648c449-ndkfx" event={"ID":"947f0d01-6b92-4f3d-bb96-a1edf03651f1","Type":"ContainerStarted","Data":"a3e169a742f513f0651ae999a838f64d8941bd3f02f950124cb766b441123831"} Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.449671 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.788243 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.790017 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.792041 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.794298 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.794419 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-92d4t" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.794867 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.796887 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.810221 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.927570 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-config-data-default\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.927636 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.927671 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.927713 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-kolla-config\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.927754 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p8s9\" (UniqueName: \"kubernetes.io/projected/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-kube-api-access-5p8s9\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.927793 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.927813 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:10 crc kubenswrapper[4520]: I0130 06:58:10.927872 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.029837 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.029941 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.030068 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.030120 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-config-data-default\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.030160 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.030202 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.030201 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.030385 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.030574 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p8s9\" (UniqueName: \"kubernetes.io/projected/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-kube-api-access-5p8s9\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.030726 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.031274 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-kolla-config\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.031938 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-config-data-default\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.032362 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.043661 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.051293 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p8s9\" (UniqueName: \"kubernetes.io/projected/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-kube-api-access-5p8s9\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.064178 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f6edd3b-e0fe-4d2b-9e68-912425c0128e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.088329 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"0f6edd3b-e0fe-4d2b-9e68-912425c0128e\") " pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.115437 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.240874 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184","Type":"ContainerStarted","Data":"02eda79311ba45f35adc42ede213147af0a559fdc676fafdf0875da6846faf29"} Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.247000 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc4abc0f-2827-4636-9942-342593697905","Type":"ContainerStarted","Data":"ba8de7795f3e191f6c65534006eb557c295f434582651b1d8f7277f7ef9b45be"} Jan 30 06:58:11 crc kubenswrapper[4520]: I0130 06:58:11.627375 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 06:58:11 crc kubenswrapper[4520]: W0130 06:58:11.654291 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f6edd3b_e0fe_4d2b_9e68_912425c0128e.slice/crio-7e877be19ea02c978f78c0e08deac982807c31f0f95fec91e97b50f5ac8931c4 WatchSource:0}: Error finding container 7e877be19ea02c978f78c0e08deac982807c31f0f95fec91e97b50f5ac8931c4: Status 404 returned error can't find the container with id 7e877be19ea02c978f78c0e08deac982807c31f0f95fec91e97b50f5ac8931c4 Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.296716 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f6edd3b-e0fe-4d2b-9e68-912425c0128e","Type":"ContainerStarted","Data":"7e877be19ea02c978f78c0e08deac982807c31f0f95fec91e97b50f5ac8931c4"} Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.331710 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.333069 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.340133 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.340193 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.340563 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-68qjc" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.340698 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.342071 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.342825 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.349346 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.349738 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sp9tc" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.349942 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.350073 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.355509 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478096 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c0133b5-7aa9-4181-b5e6-4eab077d801c-kolla-config\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478134 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm5cq\" (UniqueName: \"kubernetes.io/projected/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-kube-api-access-lm5cq\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478161 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478186 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478363 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478443 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478469 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c0133b5-7aa9-4181-b5e6-4eab077d801c-config-data\") pod 
\"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478494 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478760 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478810 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478841 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0133b5-7aa9-4181-b5e6-4eab077d801c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478861 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c0133b5-7aa9-4181-b5e6-4eab077d801c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.478941 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d544r\" (UniqueName: \"kubernetes.io/projected/1c0133b5-7aa9-4181-b5e6-4eab077d801c-kube-api-access-d544r\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583086 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d544r\" (UniqueName: \"kubernetes.io/projected/1c0133b5-7aa9-4181-b5e6-4eab077d801c-kube-api-access-d544r\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583331 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c0133b5-7aa9-4181-b5e6-4eab077d801c-kolla-config\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583434 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm5cq\" (UniqueName: \"kubernetes.io/projected/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-kube-api-access-lm5cq\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 
30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583465 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583488 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583605 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583701 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583728 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c0133b5-7aa9-4181-b5e6-4eab077d801c-config-data\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583779 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.583891 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.584019 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.584092 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0133b5-7aa9-4181-b5e6-4eab077d801c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.584117 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/1c0133b5-7aa9-4181-b5e6-4eab077d801c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.584411 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c0133b5-7aa9-4181-b5e6-4eab077d801c-kolla-config\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.584717 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.587542 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.588098 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.589508 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c0133b5-7aa9-4181-b5e6-4eab077d801c-config-data\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.591264 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.592404 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c0133b5-7aa9-4181-b5e6-4eab077d801c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.594465 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.595888 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc 
kubenswrapper[4520]: I0130 06:58:12.601970 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm5cq\" (UniqueName: \"kubernetes.io/projected/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-kube-api-access-lm5cq\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.602653 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.602693 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d544r\" (UniqueName: \"kubernetes.io/projected/1c0133b5-7aa9-4181-b5e6-4eab077d801c-kube-api-access-d544r\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.614197 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0133b5-7aa9-4181-b5e6-4eab077d801c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1c0133b5-7aa9-4181-b5e6-4eab077d801c\") " pod="openstack/memcached-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.631440 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.665343 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:12 crc kubenswrapper[4520]: I0130 06:58:12.676710 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 06:58:13 crc kubenswrapper[4520]: I0130 06:58:13.254907 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 06:58:13 crc kubenswrapper[4520]: I0130 06:58:13.306606 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa","Type":"ContainerStarted","Data":"d92f2c2b98d9e02bd497f0b69397bb2930343827e7a2985384883d44b3d03b5d"} Jan 30 06:58:13 crc kubenswrapper[4520]: I0130 06:58:13.349549 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 06:58:13 crc kubenswrapper[4520]: W0130 06:58:13.380760 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c0133b5_7aa9_4181_b5e6_4eab077d801c.slice/crio-48c279a51939c62b7dd5f94c876404ab64f50ede84aa7ae39516251d592c95c7 WatchSource:0}: Error finding container 48c279a51939c62b7dd5f94c876404ab64f50ede84aa7ae39516251d592c95c7: Status 404 returned error can't find the container with id 48c279a51939c62b7dd5f94c876404ab64f50ede84aa7ae39516251d592c95c7 Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.335618 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.336983 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.340094 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gclkf" Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.356825 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1c0133b5-7aa9-4181-b5e6-4eab077d801c","Type":"ContainerStarted","Data":"48c279a51939c62b7dd5f94c876404ab64f50ede84aa7ae39516251d592c95c7"} Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.360789 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.429015 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjmn\" (UniqueName: \"kubernetes.io/projected/808069be-f7b4-4e1c-86d2-585915e49a1f-kube-api-access-xdjmn\") pod \"kube-state-metrics-0\" (UID: \"808069be-f7b4-4e1c-86d2-585915e49a1f\") " pod="openstack/kube-state-metrics-0" Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.530601 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdjmn\" (UniqueName: \"kubernetes.io/projected/808069be-f7b4-4e1c-86d2-585915e49a1f-kube-api-access-xdjmn\") pod \"kube-state-metrics-0\" (UID: \"808069be-f7b4-4e1c-86d2-585915e49a1f\") " pod="openstack/kube-state-metrics-0" Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.553559 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdjmn\" (UniqueName: \"kubernetes.io/projected/808069be-f7b4-4e1c-86d2-585915e49a1f-kube-api-access-xdjmn\") pod \"kube-state-metrics-0\" (UID: \"808069be-f7b4-4e1c-86d2-585915e49a1f\") " pod="openstack/kube-state-metrics-0" Jan 30 06:58:14 crc kubenswrapper[4520]: I0130 06:58:14.676502 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.268640 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xrbrq"] Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.276858 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.282535 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xkmvw"] Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.283978 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.286177 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-wj779" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.286431 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.288029 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.299011 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrbrq"] Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.314690 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xkmvw"] Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.399997 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-log\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400045 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f432695-8546-408b-a2f3-5c5df41a81cf-ovn-controller-tls-certs\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400067 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-etc-ovs\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400084 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzg7\" (UniqueName: \"kubernetes.io/projected/1f432695-8546-408b-a2f3-5c5df41a81cf-kube-api-access-vxzg7\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400109 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txndd\" (UniqueName: \"kubernetes.io/projected/2de41a82-557b-4bac-8666-c0ebf0ce4676-kube-api-access-txndd\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " 
pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400136 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-run-ovn\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400161 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2de41a82-557b-4bac-8666-c0ebf0ce4676-scripts\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400182 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f432695-8546-408b-a2f3-5c5df41a81cf-combined-ca-bundle\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400212 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-log-ovn\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400229 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-run\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400242 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-run\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400262 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-lib\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.400280 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f432695-8546-408b-a2f3-5c5df41a81cf-scripts\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502010 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-log-ovn\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502056 
4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-run\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502115 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-run\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502139 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-lib\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502179 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f432695-8546-408b-a2f3-5c5df41a81cf-scripts\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502203 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-log\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502220 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f432695-8546-408b-a2f3-5c5df41a81cf-ovn-controller-tls-certs\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502256 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-etc-ovs\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502273 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzg7\" (UniqueName: \"kubernetes.io/projected/1f432695-8546-408b-a2f3-5c5df41a81cf-kube-api-access-vxzg7\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502300 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txndd\" (UniqueName: \"kubernetes.io/projected/2de41a82-557b-4bac-8666-c0ebf0ce4676-kube-api-access-txndd\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502343 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-run-ovn\") pod 
\"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502370 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2de41a82-557b-4bac-8666-c0ebf0ce4676-scripts\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502402 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f432695-8546-408b-a2f3-5c5df41a81cf-combined-ca-bundle\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.502609 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-log-ovn\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.503622 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-etc-ovs\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.503905 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-lib\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.505032 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-run\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.505084 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-run\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.508814 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f432695-8546-408b-a2f3-5c5df41a81cf-scripts\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.508817 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2de41a82-557b-4bac-8666-c0ebf0ce4676-scripts\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.508980 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/2de41a82-557b-4bac-8666-c0ebf0ce4676-var-log\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.510634 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f432695-8546-408b-a2f3-5c5df41a81cf-var-run-ovn\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.525429 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f432695-8546-408b-a2f3-5c5df41a81cf-ovn-controller-tls-certs\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.525579 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f432695-8546-408b-a2f3-5c5df41a81cf-combined-ca-bundle\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.529715 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzg7\" (UniqueName: \"kubernetes.io/projected/1f432695-8546-408b-a2f3-5c5df41a81cf-kube-api-access-vxzg7\") pod \"ovn-controller-xrbrq\" (UID: \"1f432695-8546-408b-a2f3-5c5df41a81cf\") " pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.530230 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txndd\" (UniqueName: \"kubernetes.io/projected/2de41a82-557b-4bac-8666-c0ebf0ce4676-kube-api-access-txndd\") pod \"ovn-controller-ovs-xkmvw\" (UID: \"2de41a82-557b-4bac-8666-c0ebf0ce4676\") " pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.619126 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.627869 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.638780 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.639918 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.649325 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.649983 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.650342 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.650626 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cq8m7" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.650768 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.658112 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.810556 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.810591 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/369e2dc8-5e71-4412-997f-d13e1c79eb73-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.810656 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.810692 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/369e2dc8-5e71-4412-997f-d13e1c79eb73-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.810720 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.810761 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369e2dc8-5e71-4412-997f-d13e1c79eb73-config\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.810804 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92n5m\" (UniqueName: 
\"kubernetes.io/projected/369e2dc8-5e71-4412-997f-d13e1c79eb73-kube-api-access-92n5m\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.810911 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.913214 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.913286 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.913310 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/369e2dc8-5e71-4412-997f-d13e1c79eb73-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.913386 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.913414 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/369e2dc8-5e71-4412-997f-d13e1c79eb73-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.913446 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.913469 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369e2dc8-5e71-4412-997f-d13e1c79eb73-config\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.913503 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92n5m\" (UniqueName: \"kubernetes.io/projected/369e2dc8-5e71-4412-997f-d13e1c79eb73-kube-api-access-92n5m\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 
06:58:18.914166 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/369e2dc8-5e71-4412-997f-d13e1c79eb73-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.914410 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.915040 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/369e2dc8-5e71-4412-997f-d13e1c79eb73-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.916021 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369e2dc8-5e71-4412-997f-d13e1c79eb73-config\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.934266 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.934952 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.936549 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92n5m\" (UniqueName: \"kubernetes.io/projected/369e2dc8-5e71-4412-997f-d13e1c79eb73-kube-api-access-92n5m\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.936826 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/369e2dc8-5e71-4412-997f-d13e1c79eb73-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.938175 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"369e2dc8-5e71-4412-997f-d13e1c79eb73\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:18 crc kubenswrapper[4520]: I0130 06:58:18.968314 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:20 crc kubenswrapper[4520]: I0130 06:58:20.982977 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 06:58:20 crc kubenswrapper[4520]: I0130 06:58:20.985395 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:20 crc kubenswrapper[4520]: I0130 06:58:20.987441 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 06:58:20 crc kubenswrapper[4520]: I0130 06:58:20.988271 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 06:58:20 crc kubenswrapper[4520]: I0130 06:58:20.988480 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qpzh7" Jan 30 06:58:20 crc kubenswrapper[4520]: I0130 06:58:20.990030 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.000723 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.063141 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2k78\" (UniqueName: \"kubernetes.io/projected/4835a990-9bee-4355-8057-8f2ab1218bc9-kube-api-access-p2k78\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.063186 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.063216 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.063241 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4835a990-9bee-4355-8057-8f2ab1218bc9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.063283 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.064004 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4835a990-9bee-4355-8057-8f2ab1218bc9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 
crc kubenswrapper[4520]: I0130 06:58:21.064078 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.064100 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4835a990-9bee-4355-8057-8f2ab1218bc9-config\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.165351 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.165405 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.165432 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4835a990-9bee-4355-8057-8f2ab1218bc9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.165479 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.165509 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4835a990-9bee-4355-8057-8f2ab1218bc9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.165578 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.165595 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4835a990-9bee-4355-8057-8f2ab1218bc9-config\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.165642 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2k78\" (UniqueName: \"kubernetes.io/projected/4835a990-9bee-4355-8057-8f2ab1218bc9-kube-api-access-p2k78\") pod 
\"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.166591 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.167009 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4835a990-9bee-4355-8057-8f2ab1218bc9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.167572 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4835a990-9bee-4355-8057-8f2ab1218bc9-config\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.167748 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4835a990-9bee-4355-8057-8f2ab1218bc9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.176680 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.186158 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.196199 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4835a990-9bee-4355-8057-8f2ab1218bc9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.200125 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2k78\" (UniqueName: \"kubernetes.io/projected/4835a990-9bee-4355-8057-8f2ab1218bc9-kube-api-access-p2k78\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.228488 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4835a990-9bee-4355-8057-8f2ab1218bc9\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:21 crc kubenswrapper[4520]: I0130 06:58:21.328134 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:27 crc kubenswrapper[4520]: I0130 06:58:27.793917 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:58:27 crc kubenswrapper[4520]: I0130 06:58:27.794713 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.537785 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.538170 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.538329 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b85d0548925081ae8c6bdd697658cec4,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(8b8c48de-512c-4fd1-b2de-e0e0a4fb8184): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.539666 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.542927 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.542976 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.543094 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b85d0548925081ae8c6bdd697658cec4,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> 
/var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7sjzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(fc4abc0f-2827-4636-9942-342593697905): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.544278 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="fc4abc0f-2827-4636-9942-342593697905" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.606937 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b85d0548925081ae8c6bdd697658cec4\\\"\"" pod="openstack/rabbitmq-server-0" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" Jan 30 06:58:31 crc kubenswrapper[4520]: E0130 06:58:31.607288 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b85d0548925081ae8c6bdd697658cec4\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="fc4abc0f-2827-4636-9942-342593697905" Jan 30 06:58:32 crc 
kubenswrapper[4520]: E0130 06:58:32.546928 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:32 crc kubenswrapper[4520]: E0130 06:58:32.546988 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:32 crc kubenswrapper[4520]: E0130 06:58:32.547103 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-699648c449-ndkfx_openstack(947f0d01-6b92-4f3d-bb96-a1edf03651f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:58:32 crc kubenswrapper[4520]: E0130 06:58:32.548835 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-699648c449-ndkfx" podUID="947f0d01-6b92-4f3d-bb96-a1edf03651f1" Jan 30 06:58:32 crc kubenswrapper[4520]: E0130 06:58:32.593619 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:32 crc kubenswrapper[4520]: E0130 06:58:32.593665 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:32 crc kubenswrapper[4520]: E0130 06:58:32.593778 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmp9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-d969c4c77-6zmdl_openstack(22de3ada-f978-40a4-a074-4a9d8730ce60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:58:32 crc kubenswrapper[4520]: E0130 06:58:32.595130 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" podUID="22de3ada-f978-40a4-a074-4a9d8730ce60" Jan 30 06:58:32 crc kubenswrapper[4520]: E0130 06:58:32.617124 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4\\\"\"" pod="openstack/dnsmasq-dns-699648c449-ndkfx" podUID="947f0d01-6b92-4f3d-bb96-a1edf03651f1" Jan 30 06:58:34 crc kubenswrapper[4520]: E0130 06:58:34.390854 4520 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:34 crc kubenswrapper[4520]: E0130 06:58:34.391636 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4" Jan 30 06:58:34 crc kubenswrapper[4520]: E0130 06:58:34.391789 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfxg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8d48b55b9-47phj_openstack(abb25131-80ba-42be-8e62-607dc6e04636): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 06:58:34 crc kubenswrapper[4520]: E0130 06:58:34.392975 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-8d48b55b9-47phj" podUID="abb25131-80ba-42be-8e62-607dc6e04636" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.184306 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.187320 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.291533 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22de3ada-f978-40a4-a074-4a9d8730ce60-config\") pod \"22de3ada-f978-40a4-a074-4a9d8730ce60\" (UID: \"22de3ada-f978-40a4-a074-4a9d8730ce60\") " Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.291591 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-dns-svc\") pod \"abb25131-80ba-42be-8e62-607dc6e04636\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.291642 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfxg5\" (UniqueName: \"kubernetes.io/projected/abb25131-80ba-42be-8e62-607dc6e04636-kube-api-access-dfxg5\") pod \"abb25131-80ba-42be-8e62-607dc6e04636\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.291821 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmp9s\" (UniqueName: \"kubernetes.io/projected/22de3ada-f978-40a4-a074-4a9d8730ce60-kube-api-access-mmp9s\") pod \"22de3ada-f978-40a4-a074-4a9d8730ce60\" (UID: \"22de3ada-f978-40a4-a074-4a9d8730ce60\") " Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.292004 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-config\") pod \"abb25131-80ba-42be-8e62-607dc6e04636\" (UID: \"abb25131-80ba-42be-8e62-607dc6e04636\") " Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.292379 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abb25131-80ba-42be-8e62-607dc6e04636" (UID: "abb25131-80ba-42be-8e62-607dc6e04636"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.292709 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.292868 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-config" (OuterVolumeSpecName: "config") pod "abb25131-80ba-42be-8e62-607dc6e04636" (UID: "abb25131-80ba-42be-8e62-607dc6e04636"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.296290 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb25131-80ba-42be-8e62-607dc6e04636-kube-api-access-dfxg5" (OuterVolumeSpecName: "kube-api-access-dfxg5") pod "abb25131-80ba-42be-8e62-607dc6e04636" (UID: "abb25131-80ba-42be-8e62-607dc6e04636"). InnerVolumeSpecName "kube-api-access-dfxg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.297257 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22de3ada-f978-40a4-a074-4a9d8730ce60-kube-api-access-mmp9s" (OuterVolumeSpecName: "kube-api-access-mmp9s") pod "22de3ada-f978-40a4-a074-4a9d8730ce60" (UID: "22de3ada-f978-40a4-a074-4a9d8730ce60"). InnerVolumeSpecName "kube-api-access-mmp9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.297308 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22de3ada-f978-40a4-a074-4a9d8730ce60-config" (OuterVolumeSpecName: "config") pod "22de3ada-f978-40a4-a074-4a9d8730ce60" (UID: "22de3ada-f978-40a4-a074-4a9d8730ce60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.394427 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfxg5\" (UniqueName: \"kubernetes.io/projected/abb25131-80ba-42be-8e62-607dc6e04636-kube-api-access-dfxg5\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.394671 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmp9s\" (UniqueName: \"kubernetes.io/projected/22de3ada-f978-40a4-a074-4a9d8730ce60-kube-api-access-mmp9s\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.394684 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb25131-80ba-42be-8e62-607dc6e04636-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.394695 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22de3ada-f978-40a4-a074-4a9d8730ce60-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.427861 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.638192 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.638226 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d969c4c77-6zmdl" event={"ID":"22de3ada-f978-40a4-a074-4a9d8730ce60","Type":"ContainerDied","Data":"444285496adce14cab5130b185febb5a2b83ef1011622ea4210313aa7c499834"} Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.639145 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8d48b55b9-47phj" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.639140 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8d48b55b9-47phj" event={"ID":"abb25131-80ba-42be-8e62-607dc6e04636","Type":"ContainerDied","Data":"17354343caccc7dbb96fa38aa2aa6395df6c71b8315465fb8de11eda481cb946"} Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.641991 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa","Type":"ContainerStarted","Data":"d39ad474613777e7caff8e99b576d3035d366429aeea9cbdfe03b5823cefc032"} Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.648110 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1c0133b5-7aa9-4181-b5e6-4eab077d801c","Type":"ContainerStarted","Data":"120e42f1f6f280a2b90e521122b2c6b282afeba83975d5fd97b72ca8c5b06da0"} Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.648773 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.649803 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"808069be-f7b4-4e1c-86d2-585915e49a1f","Type":"ContainerStarted","Data":"46a8c4ccd615523a5ec960d4d0ba0817a0f7cb2101fb054eaf8a69f30d1f59be"} Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.657022 4520 generic.go:334] "Generic (PLEG): container finished" podID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" containerID="054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0" exitCode=0 Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.657224 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" event={"ID":"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb","Type":"ContainerDied","Data":"054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0"} Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.723071 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.9682085360000001 podStartE2EDuration="23.723055214s" podCreationTimestamp="2026-01-30 06:58:12 +0000 UTC" firstStartedPulling="2026-01-30 06:58:13.382848244 +0000 UTC m=+807.011200414" lastFinishedPulling="2026-01-30 06:58:35.13769492 +0000 UTC m=+828.766047092" observedRunningTime="2026-01-30 06:58:35.684821729 +0000 UTC m=+829.313173910" watchObservedRunningTime="2026-01-30 06:58:35.723055214 +0000 UTC m=+829.351407395" Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.751748 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f6edd3b-e0fe-4d2b-9e68-912425c0128e","Type":"ContainerStarted","Data":"f942e93743917cc96337bbca7c997da52e7e8391ff055df569c7cf3ea2234682"} Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.765949 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrbrq"] Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.836488 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d969c4c77-6zmdl"] Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.844044 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d969c4c77-6zmdl"] Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.939148 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-8d48b55b9-47phj"] Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.944965 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8d48b55b9-47phj"] Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.951738 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 06:58:35 crc kubenswrapper[4520]: I0130 06:58:35.974928 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xkmvw"] Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.579270 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 06:58:36 crc kubenswrapper[4520]: W0130 06:58:36.582833 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4835a990_9bee_4355_8057_8f2ab1218bc9.slice/crio-277620dac3bf49909e1204b08858df8b9020071fbffaa6e14bf1d363c4572a21 WatchSource:0}: Error finding container 277620dac3bf49909e1204b08858df8b9020071fbffaa6e14bf1d363c4572a21: Status 404 returned error can't find the container with id 277620dac3bf49909e1204b08858df8b9020071fbffaa6e14bf1d363c4572a21 Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.699684 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22de3ada-f978-40a4-a074-4a9d8730ce60" path="/var/lib/kubelet/pods/22de3ada-f978-40a4-a074-4a9d8730ce60/volumes" Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.700156 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb25131-80ba-42be-8e62-607dc6e04636" path="/var/lib/kubelet/pods/abb25131-80ba-42be-8e62-607dc6e04636/volumes" Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.781046 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrbrq" event={"ID":"1f432695-8546-408b-a2f3-5c5df41a81cf","Type":"ContainerStarted","Data":"4d0d6e713cbb164a9a4ac7efcfcadf1d6dea9a9b95ab09180e7c26ddf516729c"} Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.785098 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" event={"ID":"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb","Type":"ContainerStarted","Data":"5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca"} Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.785414 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.789712 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"369e2dc8-5e71-4412-997f-d13e1c79eb73","Type":"ContainerStarted","Data":"cc2be2344baf43abf9bd01dd0cd53889bcb1466cfe9c731199c5a360a2696b30"} Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.795406 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xkmvw" event={"ID":"2de41a82-557b-4bac-8666-c0ebf0ce4676","Type":"ContainerStarted","Data":"19840f9513b152b5e9a0a7e9d5362358f3afe3efb4f6649ff5747802ce17d4e3"} Jan 30 06:58:36 crc kubenswrapper[4520]: I0130 06:58:36.799501 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4835a990-9bee-4355-8057-8f2ab1218bc9","Type":"ContainerStarted","Data":"277620dac3bf49909e1204b08858df8b9020071fbffaa6e14bf1d363c4572a21"} Jan 30 06:58:38 crc kubenswrapper[4520]: I0130 06:58:38.827690 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"808069be-f7b4-4e1c-86d2-585915e49a1f","Type":"ContainerStarted","Data":"72c4b506da37fc371c227be33163d16699d0b85599ffc83a6f4e642a05b2fe48"} Jan 30 06:58:38 crc kubenswrapper[4520]: I0130 06:58:38.828157 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 06:58:38 crc kubenswrapper[4520]: I0130 06:58:38.847305 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" podStartSLOduration=5.049170396 podStartE2EDuration="30.847284344s" podCreationTimestamp="2026-01-30 06:58:08 +0000 UTC" firstStartedPulling="2026-01-30 06:58:09.35567626 +0000 UTC m=+802.984028441" lastFinishedPulling="2026-01-30 06:58:35.153790208 +0000 UTC m=+828.782142389" observedRunningTime="2026-01-30 06:58:36.815176122 +0000 UTC m=+830.443528304" watchObservedRunningTime="2026-01-30 06:58:38.847284344 +0000 UTC m=+832.475636525" Jan 30 06:58:38 crc kubenswrapper[4520]: I0130 06:58:38.848384 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.097037129 podStartE2EDuration="24.848379733s" podCreationTimestamp="2026-01-30 06:58:14 +0000 UTC" firstStartedPulling="2026-01-30 06:58:35.446093366 +0000 UTC m=+829.074445547" lastFinishedPulling="2026-01-30 06:58:38.19743597 +0000 UTC m=+831.825788151" observedRunningTime="2026-01-30 06:58:38.842641345 +0000 UTC m=+832.470993526" watchObservedRunningTime="2026-01-30 06:58:38.848379733 +0000 UTC m=+832.476731914" Jan 30 06:58:39 crc kubenswrapper[4520]: I0130 06:58:39.837620 4520 generic.go:334] "Generic (PLEG): container finished" podID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerID="f942e93743917cc96337bbca7c997da52e7e8391ff055df569c7cf3ea2234682" exitCode=0 Jan 30 06:58:39 crc kubenswrapper[4520]: I0130 06:58:39.837725 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f6edd3b-e0fe-4d2b-9e68-912425c0128e","Type":"ContainerDied","Data":"f942e93743917cc96337bbca7c997da52e7e8391ff055df569c7cf3ea2234682"} Jan 30 06:58:39 crc kubenswrapper[4520]: I0130 06:58:39.843922 4520 generic.go:334] "Generic (PLEG): container finished" podID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerID="d39ad474613777e7caff8e99b576d3035d366429aeea9cbdfe03b5823cefc032" exitCode=0 Jan 30 06:58:39 crc kubenswrapper[4520]: I0130 06:58:39.844012 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa","Type":"ContainerDied","Data":"d39ad474613777e7caff8e99b576d3035d366429aeea9cbdfe03b5823cefc032"} Jan 30 06:58:41 crc kubenswrapper[4520]: I0130 06:58:41.863615 4520 generic.go:334] "Generic (PLEG): container finished" podID="2de41a82-557b-4bac-8666-c0ebf0ce4676" containerID="07822dc89ba258233e379370b073c5ed4183f5341610cd0e68e8d7489216229f" exitCode=0 Jan 30 06:58:41 crc kubenswrapper[4520]: I0130 06:58:41.863998 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xkmvw" event={"ID":"2de41a82-557b-4bac-8666-c0ebf0ce4676","Type":"ContainerDied","Data":"07822dc89ba258233e379370b073c5ed4183f5341610cd0e68e8d7489216229f"} Jan 30 06:58:41 crc kubenswrapper[4520]: I0130 06:58:41.867685 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"4835a990-9bee-4355-8057-8f2ab1218bc9","Type":"ContainerStarted","Data":"4166a16f56a15d8ab576ce5b747f1eba8538c24bd83baac36376add4cb9d7eaa"} Jan 30 06:58:41 crc kubenswrapper[4520]: I0130 06:58:41.871445 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"369e2dc8-5e71-4412-997f-d13e1c79eb73","Type":"ContainerStarted","Data":"f3f79912afdf352b15d81e910fa4f92bae139d2ed7bfa9cb2c3611a07058cf48"} Jan 30 06:58:41 crc kubenswrapper[4520]: I0130 06:58:41.876148 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f6edd3b-e0fe-4d2b-9e68-912425c0128e","Type":"ContainerStarted","Data":"1080b106b85d3627a546f04d75c8a802b64c642eb66374fd2b6d3ff864941023"} Jan 30 06:58:41 crc kubenswrapper[4520]: I0130 06:58:41.879326 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa","Type":"ContainerStarted","Data":"ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b"} Jan 30 06:58:41 crc kubenswrapper[4520]: I0130 06:58:41.908247 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.44084798 podStartE2EDuration="32.908236038s" podCreationTimestamp="2026-01-30 06:58:09 +0000 UTC" firstStartedPulling="2026-01-30 06:58:11.664869191 +0000 UTC m=+805.293221372" lastFinishedPulling="2026-01-30 06:58:35.132257248 +0000 UTC m=+828.760609430" observedRunningTime="2026-01-30 06:58:41.905628707 +0000 UTC m=+835.533980888" watchObservedRunningTime="2026-01-30 06:58:41.908236038 +0000 UTC m=+835.536588220" Jan 30 06:58:41 crc kubenswrapper[4520]: I0130 06:58:41.934921 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.038405215 podStartE2EDuration="30.934902368s" podCreationTimestamp="2026-01-30 06:58:11 +0000 UTC" firstStartedPulling="2026-01-30 06:58:13.267829652 +0000 UTC m=+806.896181833" lastFinishedPulling="2026-01-30 06:58:35.164326806 +0000 UTC m=+828.792678986" observedRunningTime="2026-01-30 06:58:41.932768998 +0000 UTC m=+835.561121179" watchObservedRunningTime="2026-01-30 06:58:41.934902368 +0000 UTC m=+835.563254549" Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.666498 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.666965 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.678595 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.888770 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xkmvw" event={"ID":"2de41a82-557b-4bac-8666-c0ebf0ce4676","Type":"ContainerStarted","Data":"74bd9c80277d9377c70a961d2e684377ec10f5138c21d30e723b815f5d1e089e"} Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.888964 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.888980 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xkmvw" 
event={"ID":"2de41a82-557b-4bac-8666-c0ebf0ce4676","Type":"ContainerStarted","Data":"07b0e3447d1a6215ee82c24489cacc69a31c5ac34e19fee8c860a0b993a58c9b"} Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.888995 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.892650 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrbrq" event={"ID":"1f432695-8546-408b-a2f3-5c5df41a81cf","Type":"ContainerStarted","Data":"f9f5970d7929067ed4b85d8454b54cf17d099be48b08194c860ff9feef8dbd89"} Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.910437 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-xkmvw" podStartSLOduration=20.113953653 podStartE2EDuration="24.910415797s" podCreationTimestamp="2026-01-30 06:58:18 +0000 UTC" firstStartedPulling="2026-01-30 06:58:35.977320565 +0000 UTC m=+829.605672747" lastFinishedPulling="2026-01-30 06:58:40.77378271 +0000 UTC m=+834.402134891" observedRunningTime="2026-01-30 06:58:42.909362438 +0000 UTC m=+836.537714619" watchObservedRunningTime="2026-01-30 06:58:42.910415797 +0000 UTC m=+836.538767979" Jan 30 06:58:42 crc kubenswrapper[4520]: I0130 06:58:42.924710 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xrbrq" podStartSLOduration=19.117700605 podStartE2EDuration="24.924690853s" podCreationTimestamp="2026-01-30 06:58:18 +0000 UTC" firstStartedPulling="2026-01-30 06:58:35.787712813 +0000 UTC m=+829.416064994" lastFinishedPulling="2026-01-30 06:58:41.594703061 +0000 UTC m=+835.223055242" observedRunningTime="2026-01-30 06:58:42.923012878 +0000 UTC m=+836.551365059" watchObservedRunningTime="2026-01-30 06:58:42.924690853 +0000 UTC m=+836.553043034" Jan 30 06:58:43 crc kubenswrapper[4520]: I0130 06:58:43.623916 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xrbrq" Jan 30 06:58:43 crc kubenswrapper[4520]: I0130 06:58:43.874759 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:43 crc kubenswrapper[4520]: I0130 06:58:43.904266 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"369e2dc8-5e71-4412-997f-d13e1c79eb73","Type":"ContainerStarted","Data":"03cc96eb5924073a6b3f7abcf1f341fa584b0e8a4767dea26a3668a055bbccad"} Jan 30 06:58:43 crc kubenswrapper[4520]: I0130 06:58:43.907241 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4835a990-9bee-4355-8057-8f2ab1218bc9","Type":"ContainerStarted","Data":"21a810dabd0b9f7b72e0a591efb1fb4dcc5f6d3207d344c38db566c62ddbc8a0"} Jan 30 06:58:43 crc kubenswrapper[4520]: I0130 06:58:43.937649 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699648c449-ndkfx"] Jan 30 06:58:43 crc kubenswrapper[4520]: I0130 06:58:43.968952 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:43 crc kubenswrapper[4520]: I0130 06:58:43.974572 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.080014408 podStartE2EDuration="24.974555743s" podCreationTimestamp="2026-01-30 06:58:19 +0000 UTC" firstStartedPulling="2026-01-30 06:58:36.585318916 +0000 UTC m=+830.213671097" lastFinishedPulling="2026-01-30 
06:58:43.47986025 +0000 UTC m=+837.108212432" observedRunningTime="2026-01-30 06:58:43.972757724 +0000 UTC m=+837.601109905" watchObservedRunningTime="2026-01-30 06:58:43.974555743 +0000 UTC m=+837.602907925" Jan 30 06:58:43 crc kubenswrapper[4520]: I0130 06:58:43.977295 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.428013909 podStartE2EDuration="26.97728245s" podCreationTimestamp="2026-01-30 06:58:17 +0000 UTC" firstStartedPulling="2026-01-30 06:58:35.932532418 +0000 UTC m=+829.560884599" lastFinishedPulling="2026-01-30 06:58:43.481800959 +0000 UTC m=+837.110153140" observedRunningTime="2026-01-30 06:58:43.950450299 +0000 UTC m=+837.578802480" watchObservedRunningTime="2026-01-30 06:58:43.97728245 +0000 UTC m=+837.605634631" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.231065 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.362295 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-dns-svc\") pod \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.362405 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-config\") pod \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.362508 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjjgx\" (UniqueName: \"kubernetes.io/projected/947f0d01-6b92-4f3d-bb96-a1edf03651f1-kube-api-access-kjjgx\") pod \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\" (UID: \"947f0d01-6b92-4f3d-bb96-a1edf03651f1\") " Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.363092 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-config" (OuterVolumeSpecName: "config") pod "947f0d01-6b92-4f3d-bb96-a1edf03651f1" (UID: "947f0d01-6b92-4f3d-bb96-a1edf03651f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.363546 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "947f0d01-6b92-4f3d-bb96-a1edf03651f1" (UID: "947f0d01-6b92-4f3d-bb96-a1edf03651f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.368486 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947f0d01-6b92-4f3d-bb96-a1edf03651f1-kube-api-access-kjjgx" (OuterVolumeSpecName: "kube-api-access-kjjgx") pod "947f0d01-6b92-4f3d-bb96-a1edf03651f1" (UID: "947f0d01-6b92-4f3d-bb96-a1edf03651f1"). InnerVolumeSpecName "kube-api-access-kjjgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.465257 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.465395 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjjgx\" (UniqueName: \"kubernetes.io/projected/947f0d01-6b92-4f3d-bb96-a1edf03651f1-kube-api-access-kjjgx\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.465472 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/947f0d01-6b92-4f3d-bb96-a1edf03651f1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.682573 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.723298 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f4cf86b6f-dd654"] Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.724972 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.751216 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f4cf86b6f-dd654"] Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.873730 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjsbl\" (UniqueName: \"kubernetes.io/projected/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-kube-api-access-fjsbl\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.874145 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-dns-svc\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.874239 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-config\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.914928 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699648c449-ndkfx" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.915328 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699648c449-ndkfx" event={"ID":"947f0d01-6b92-4f3d-bb96-a1edf03651f1","Type":"ContainerDied","Data":"a3e169a742f513f0651ae999a838f64d8941bd3f02f950124cb766b441123831"} Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.944932 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699648c449-ndkfx"] Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.954614 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699648c449-ndkfx"] Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.975726 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-dns-svc\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.975800 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-config\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.975860 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjsbl\" (UniqueName: \"kubernetes.io/projected/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-kube-api-access-fjsbl\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.979455 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-config\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:44 crc kubenswrapper[4520]: I0130 06:58:44.979765 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-dns-svc\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.011432 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjsbl\" (UniqueName: \"kubernetes.io/projected/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-kube-api-access-fjsbl\") pod \"dnsmasq-dns-7f4cf86b6f-dd654\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.046442 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.329579 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.364388 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.481888 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f4cf86b6f-dd654"] Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.846234 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.850607 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.852117 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.852554 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gd4qr" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.854040 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.854428 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.877483 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.921249 4520 generic.go:334] "Generic (PLEG): container finished" podID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" containerID="58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce" exitCode=0 Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.921290 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" event={"ID":"cf090d59-44c4-4f21-a255-3eb4e3e6b64a","Type":"ContainerDied","Data":"58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce"} Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.922219 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" event={"ID":"cf090d59-44c4-4f21-a255-3eb4e3e6b64a","Type":"ContainerStarted","Data":"7c0eb8a70828336daf138f8bacca679b77ce2b8eb05baeaddce97c2009f88c77"} Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.924419 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.968766 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.997084 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7wb\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-kube-api-access-zh7wb\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.997134 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.997170 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d0bd1d1-935d-458c-9cf8-c11455791a64-cache\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.997209 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1d0bd1d1-935d-458c-9cf8-c11455791a64-lock\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.997269 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:45 crc kubenswrapper[4520]: I0130 06:58:45.997507 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0bd1d1-935d-458c-9cf8-c11455791a64-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.004437 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.082037 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-97w8x"] Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.083235 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.085269 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.085502 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.087817 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.100628 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh7wb\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-kube-api-access-zh7wb\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.100683 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.100706 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d0bd1d1-935d-458c-9cf8-c11455791a64-cache\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.100728 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1d0bd1d1-935d-458c-9cf8-c11455791a64-lock\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.100780 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.100844 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0bd1d1-935d-458c-9cf8-c11455791a64-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: E0130 06:58:46.103314 4520 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 06:58:46 crc kubenswrapper[4520]: E0130 06:58:46.103336 4520 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 06:58:46 crc kubenswrapper[4520]: E0130 06:58:46.103372 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift podName:1d0bd1d1-935d-458c-9cf8-c11455791a64 nodeName:}" failed. No retries permitted until 2026-01-30 06:58:46.603356738 +0000 UTC m=+840.231708919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift") pod "swift-storage-0" (UID: "1d0bd1d1-935d-458c-9cf8-c11455791a64") : configmap "swift-ring-files" not found Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.103886 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d0bd1d1-935d-458c-9cf8-c11455791a64-cache\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.104250 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1d0bd1d1-935d-458c-9cf8-c11455791a64-lock\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.104693 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.108143 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0bd1d1-935d-458c-9cf8-c11455791a64-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.119615 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-97w8x"] Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.125424 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh7wb\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-kube-api-access-zh7wb\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.127407 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.202129 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-combined-ca-bundle\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.202175 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-scripts\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.202209 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-dispersionconf\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.202300 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/49410419-7629-431c-9f17-b66263889ede-etc-swift\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.202367 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-ring-data-devices\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.202383 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-swiftconf\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.202413 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtqx\" (UniqueName: \"kubernetes.io/projected/49410419-7629-431c-9f17-b66263889ede-kube-api-access-dhtqx\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.303837 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/49410419-7629-431c-9f17-b66263889ede-etc-swift\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.303924 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-ring-data-devices\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.303944 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-swiftconf\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.303973 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhtqx\" (UniqueName: \"kubernetes.io/projected/49410419-7629-431c-9f17-b66263889ede-kube-api-access-dhtqx\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.304081 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-combined-ca-bundle\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.304108 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-scripts\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.304134 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-dispersionconf\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.304292 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/49410419-7629-431c-9f17-b66263889ede-etc-swift\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.304830 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-scripts\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.305771 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-ring-data-devices\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.308152 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-swiftconf\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.308168 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-dispersionconf\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.308776 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-combined-ca-bundle\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.316671 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhtqx\" (UniqueName: \"kubernetes.io/projected/49410419-7629-431c-9f17-b66263889ede-kube-api-access-dhtqx\") pod \"swift-ring-rebalance-97w8x\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " 
pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.357333 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.399247 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.615079 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:46 crc kubenswrapper[4520]: E0130 06:58:46.615674 4520 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 06:58:46 crc kubenswrapper[4520]: E0130 06:58:46.615692 4520 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 06:58:46 crc kubenswrapper[4520]: E0130 06:58:46.615734 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift podName:1d0bd1d1-935d-458c-9cf8-c11455791a64 nodeName:}" failed. No retries permitted until 2026-01-30 06:58:47.615720557 +0000 UTC m=+841.244072738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift") pod "swift-storage-0" (UID: "1d0bd1d1-935d-458c-9cf8-c11455791a64") : configmap "swift-ring-files" not found Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.725110 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947f0d01-6b92-4f3d-bb96-a1edf03651f1" path="/var/lib/kubelet/pods/947f0d01-6b92-4f3d-bb96-a1edf03651f1/volumes" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.725760 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f4cf86b6f-dd654"] Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.763474 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57b8bf797-vl52c"] Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.766716 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.768132 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.780892 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b8bf797-vl52c"] Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.822768 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wnvg5"] Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.825078 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.829137 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.831695 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wnvg5"] Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.858438 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-97w8x"] Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922328 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cf948014-fede-4dc7-a484-545bb9ad7470-ovn-rundir\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922382 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cf948014-fede-4dc7-a484-545bb9ad7470-ovs-rundir\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922405 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvc4v\" (UniqueName: \"kubernetes.io/projected/cf948014-fede-4dc7-a484-545bb9ad7470-kube-api-access-wvc4v\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922427 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf948014-fede-4dc7-a484-545bb9ad7470-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922468 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf948014-fede-4dc7-a484-545bb9ad7470-config\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922495 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-dns-svc\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922532 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-config\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922551 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf948014-fede-4dc7-a484-545bb9ad7470-combined-ca-bundle\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922576 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-ovsdbserver-sb\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.922601 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brdkp\" (UniqueName: \"kubernetes.io/projected/d7743ce8-ab4f-4639-96f8-12d4eef3a560-kube-api-access-brdkp\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.931026 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" event={"ID":"cf090d59-44c4-4f21-a255-3eb4e3e6b64a","Type":"ContainerStarted","Data":"ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46"} Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.932639 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-97w8x" event={"ID":"49410419-7629-431c-9f17-b66263889ede","Type":"ContainerStarted","Data":"37651600b8566d4c343cddc258c7d7ab364de2d8a1ecedc86ccc0673d1b32403"} Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.949172 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" podStartSLOduration=2.949134008 podStartE2EDuration="2.949134008s" podCreationTimestamp="2026-01-30 06:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:58:46.947073344 +0000 UTC m=+840.575425525" watchObservedRunningTime="2026-01-30 06:58:46.949134008 +0000 UTC m=+840.577486189" Jan 30 06:58:46 crc kubenswrapper[4520]: I0130 06:58:46.983208 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.023984 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cf948014-fede-4dc7-a484-545bb9ad7470-ovn-rundir\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.024080 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cf948014-fede-4dc7-a484-545bb9ad7470-ovs-rundir\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.024099 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvc4v\" (UniqueName: \"kubernetes.io/projected/cf948014-fede-4dc7-a484-545bb9ad7470-kube-api-access-wvc4v\") pod \"ovn-controller-metrics-wnvg5\" (UID: 
\"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.024118 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf948014-fede-4dc7-a484-545bb9ad7470-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.024688 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cf948014-fede-4dc7-a484-545bb9ad7470-ovs-rundir\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.024840 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cf948014-fede-4dc7-a484-545bb9ad7470-ovn-rundir\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.026225 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf948014-fede-4dc7-a484-545bb9ad7470-config\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.026466 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-dns-svc\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.026575 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-config\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.026608 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf948014-fede-4dc7-a484-545bb9ad7470-combined-ca-bundle\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.026667 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-ovsdbserver-sb\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.026765 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brdkp\" (UniqueName: \"kubernetes.io/projected/d7743ce8-ab4f-4639-96f8-12d4eef3a560-kube-api-access-brdkp\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 
crc kubenswrapper[4520]: I0130 06:58:47.028033 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf948014-fede-4dc7-a484-545bb9ad7470-config\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.028127 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-dns-svc\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.028926 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-ovsdbserver-sb\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.029406 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-config\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.043874 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvc4v\" (UniqueName: \"kubernetes.io/projected/cf948014-fede-4dc7-a484-545bb9ad7470-kube-api-access-wvc4v\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.044715 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf948014-fede-4dc7-a484-545bb9ad7470-combined-ca-bundle\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.048256 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brdkp\" (UniqueName: \"kubernetes.io/projected/d7743ce8-ab4f-4639-96f8-12d4eef3a560-kube-api-access-brdkp\") pod \"dnsmasq-dns-57b8bf797-vl52c\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.054412 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf948014-fede-4dc7-a484-545bb9ad7470-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wnvg5\" (UID: \"cf948014-fede-4dc7-a484-545bb9ad7470\") " pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.091400 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.109435 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b8bf797-vl52c"] Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.136778 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wnvg5" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.178312 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd8468b69-r99hr"] Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.180468 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.183613 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.203052 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd8468b69-r99hr"] Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.239755 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-config\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.239815 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-dns-svc\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.239846 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.239877 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbr69\" (UniqueName: \"kubernetes.io/projected/d4fdb7e3-5390-4912-8331-36f326f97d7c-kube-api-access-nbr69\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.239970 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.344558 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.344686 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-config\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: 
I0130 06:58:47.344746 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-dns-svc\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.344775 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.344812 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbr69\" (UniqueName: \"kubernetes.io/projected/d4fdb7e3-5390-4912-8331-36f326f97d7c-kube-api-access-nbr69\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.346322 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.346947 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.347093 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-dns-svc\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.347500 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-config\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.366624 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbr69\" (UniqueName: \"kubernetes.io/projected/d4fdb7e3-5390-4912-8331-36f326f97d7c-kube-api-access-nbr69\") pod \"dnsmasq-dns-cd8468b69-r99hr\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") " pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.501809 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.595880 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.612840 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.616766 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.616958 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.617065 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9w22t" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.619225 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.619705 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.659124 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:47 crc kubenswrapper[4520]: E0130 06:58:47.659533 4520 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 06:58:47 crc kubenswrapper[4520]: E0130 06:58:47.659553 4520 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 06:58:47 crc kubenswrapper[4520]: E0130 06:58:47.659596 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift podName:1d0bd1d1-935d-458c-9cf8-c11455791a64 nodeName:}" failed. No retries permitted until 2026-01-30 06:58:49.659580931 +0000 UTC m=+843.287933112 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift") pod "swift-storage-0" (UID: "1d0bd1d1-935d-458c-9cf8-c11455791a64") : configmap "swift-ring-files" not found Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.671494 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b8bf797-vl52c"] Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.735411 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wnvg5"] Jan 30 06:58:47 crc kubenswrapper[4520]: W0130 06:58:47.741211 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf948014_fede_4dc7_a484_545bb9ad7470.slice/crio-604c48e5efffc1ff817669fb5d2276981a389b55f1a661aee36da08cea5b056a WatchSource:0}: Error finding container 604c48e5efffc1ff817669fb5d2276981a389b55f1a661aee36da08cea5b056a: Status 404 returned error can't find the container with id 604c48e5efffc1ff817669fb5d2276981a389b55f1a661aee36da08cea5b056a Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.762663 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.765441 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.765466 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspwm\" (UniqueName: \"kubernetes.io/projected/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-kube-api-access-dspwm\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.765537 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.765558 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.765582 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-scripts\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.765608 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-config\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.867740 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.868014 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.868036 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspwm\" (UniqueName: \"kubernetes.io/projected/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-kube-api-access-dspwm\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.868081 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.868100 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.868124 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-scripts\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.868149 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-config\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.868806 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.869118 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-config\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.869439 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-scripts\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.946966 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wnvg5" event={"ID":"cf948014-fede-4dc7-a484-545bb9ad7470","Type":"ContainerStarted","Data":"604c48e5efffc1ff817669fb5d2276981a389b55f1a661aee36da08cea5b056a"} Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.952318 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" event={"ID":"d7743ce8-ab4f-4639-96f8-12d4eef3a560","Type":"ContainerStarted","Data":"1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4"} Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.952414 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" event={"ID":"d7743ce8-ab4f-4639-96f8-12d4eef3a560","Type":"ContainerStarted","Data":"98a9708f630b8d49d62782891907f1bd576ac203549163f02bba8f82e991f560"} Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.952424 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" podUID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" containerName="dnsmasq-dns" containerID="cri-o://ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46" gracePeriod=10 Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.952550 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.953210 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" podUID="d7743ce8-ab4f-4639-96f8-12d4eef3a560" containerName="init" containerID="cri-o://1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4" gracePeriod=10 Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.963978 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.967881 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.969193 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:47 crc kubenswrapper[4520]: I0130 06:58:47.970902 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspwm\" (UniqueName: \"kubernetes.io/projected/d0e0c178-3ca9-4112-a7eb-d013ed5107a2-kube-api-access-dspwm\") pod \"ovn-northd-0\" (UID: \"d0e0c178-3ca9-4112-a7eb-d013ed5107a2\") " pod="openstack/ovn-northd-0" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.097990 4520 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-cd8468b69-r99hr"] Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.239998 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 06:58:48 crc kubenswrapper[4520]: W0130 06:58:48.265242 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4fdb7e3_5390_4912_8331_36f326f97d7c.slice/crio-ed40fe0a23169aea0c2aa9b155854b54ea17e75a347c8b94d30473b81c8ed399 WatchSource:0}: Error finding container ed40fe0a23169aea0c2aa9b155854b54ea17e75a347c8b94d30473b81c8ed399: Status 404 returned error can't find the container with id ed40fe0a23169aea0c2aa9b155854b54ea17e75a347c8b94d30473b81c8ed399 Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.531802 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.593925 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-config\") pod \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.594178 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-ovsdbserver-sb\") pod \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.594366 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brdkp\" (UniqueName: \"kubernetes.io/projected/d7743ce8-ab4f-4639-96f8-12d4eef3a560-kube-api-access-brdkp\") pod \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.594403 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-dns-svc\") pod \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\" (UID: \"d7743ce8-ab4f-4639-96f8-12d4eef3a560\") " Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.615112 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7743ce8-ab4f-4639-96f8-12d4eef3a560-kube-api-access-brdkp" (OuterVolumeSpecName: "kube-api-access-brdkp") pod "d7743ce8-ab4f-4639-96f8-12d4eef3a560" (UID: "d7743ce8-ab4f-4639-96f8-12d4eef3a560"). InnerVolumeSpecName "kube-api-access-brdkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.671530 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7743ce8-ab4f-4639-96f8-12d4eef3a560" (UID: "d7743ce8-ab4f-4639-96f8-12d4eef3a560"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.687234 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7743ce8-ab4f-4639-96f8-12d4eef3a560" (UID: "d7743ce8-ab4f-4639-96f8-12d4eef3a560"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.688805 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-config" (OuterVolumeSpecName: "config") pod "d7743ce8-ab4f-4639-96f8-12d4eef3a560" (UID: "d7743ce8-ab4f-4639-96f8-12d4eef3a560"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.698331 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brdkp\" (UniqueName: \"kubernetes.io/projected/d7743ce8-ab4f-4639-96f8-12d4eef3a560-kube-api-access-brdkp\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.698353 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.698363 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.698372 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7743ce8-ab4f-4639-96f8-12d4eef3a560-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.729353 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.798814 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-config\") pod \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.798894 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjsbl\" (UniqueName: \"kubernetes.io/projected/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-kube-api-access-fjsbl\") pod \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.798941 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-dns-svc\") pod \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\" (UID: \"cf090d59-44c4-4f21-a255-3eb4e3e6b64a\") " Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.824970 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-kube-api-access-fjsbl" (OuterVolumeSpecName: "kube-api-access-fjsbl") pod "cf090d59-44c4-4f21-a255-3eb4e3e6b64a" (UID: "cf090d59-44c4-4f21-a255-3eb4e3e6b64a"). 
InnerVolumeSpecName "kube-api-access-fjsbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.834996 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.843605 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf090d59-44c4-4f21-a255-3eb4e3e6b64a" (UID: "cf090d59-44c4-4f21-a255-3eb4e3e6b64a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.849594 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-config" (OuterVolumeSpecName: "config") pod "cf090d59-44c4-4f21-a255-3eb4e3e6b64a" (UID: "cf090d59-44c4-4f21-a255-3eb4e3e6b64a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.901423 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.901448 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.901458 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjsbl\" (UniqueName: \"kubernetes.io/projected/cf090d59-44c4-4f21-a255-3eb4e3e6b64a-kube-api-access-fjsbl\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.915613 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.928633 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.966348 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wnvg5" event={"ID":"cf948014-fede-4dc7-a484-545bb9ad7470","Type":"ContainerStarted","Data":"6c9193b01c8414305234de31637f5c0dea8eb0876124752c126e3d4c2512d188"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.968585 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d0e0c178-3ca9-4112-a7eb-d013ed5107a2","Type":"ContainerStarted","Data":"e6deb0ba20fbc9adfc7eac470fd30f21717f167ca71c0a7820635f1c1f86f728"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.971442 4520 generic.go:334] "Generic (PLEG): container finished" podID="d7743ce8-ab4f-4639-96f8-12d4eef3a560" containerID="1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4" exitCode=0 Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.971534 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.972059 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" event={"ID":"d7743ce8-ab4f-4639-96f8-12d4eef3a560","Type":"ContainerDied","Data":"1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.972078 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b8bf797-vl52c" event={"ID":"d7743ce8-ab4f-4639-96f8-12d4eef3a560","Type":"ContainerDied","Data":"98a9708f630b8d49d62782891907f1bd576ac203549163f02bba8f82e991f560"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.972093 4520 scope.go:117] "RemoveContainer" containerID="1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.976042 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184","Type":"ContainerStarted","Data":"163e771c24eeb7d5133bc8d1013b839f3e5ccdaa9f64759d7a1ab8384a1b0f44"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.977625 4520 generic.go:334] "Generic (PLEG): container finished" podID="d4fdb7e3-5390-4912-8331-36f326f97d7c" containerID="3e8493e02642e4746126afe47a4c4e5277c49f14e2050b01ab0eaafa48569ab5" exitCode=0 Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.977677 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" event={"ID":"d4fdb7e3-5390-4912-8331-36f326f97d7c","Type":"ContainerDied","Data":"3e8493e02642e4746126afe47a4c4e5277c49f14e2050b01ab0eaafa48569ab5"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.977693 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" event={"ID":"d4fdb7e3-5390-4912-8331-36f326f97d7c","Type":"ContainerStarted","Data":"ed40fe0a23169aea0c2aa9b155854b54ea17e75a347c8b94d30473b81c8ed399"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.984325 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wnvg5" podStartSLOduration=2.984317531 podStartE2EDuration="2.984317531s" podCreationTimestamp="2026-01-30 06:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:58:48.978968134 +0000 UTC m=+842.607320306" watchObservedRunningTime="2026-01-30 06:58:48.984317531 +0000 UTC m=+842.612669713" Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.995030 4520 generic.go:334] "Generic (PLEG): container finished" podID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" containerID="ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46" exitCode=0 Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.995097 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" event={"ID":"cf090d59-44c4-4f21-a255-3eb4e3e6b64a","Type":"ContainerDied","Data":"ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 06:58:48.995124 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" event={"ID":"cf090d59-44c4-4f21-a255-3eb4e3e6b64a","Type":"ContainerDied","Data":"7c0eb8a70828336daf138f8bacca679b77ce2b8eb05baeaddce97c2009f88c77"} Jan 30 06:58:48 crc kubenswrapper[4520]: I0130 
06:58:48.995178 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f4cf86b6f-dd654" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.008554 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc4abc0f-2827-4636-9942-342593697905","Type":"ContainerStarted","Data":"d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2"} Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.036159 4520 scope.go:117] "RemoveContainer" containerID="1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4" Jan 30 06:58:49 crc kubenswrapper[4520]: E0130 06:58:49.039096 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4\": container with ID starting with 1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4 not found: ID does not exist" containerID="1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.039151 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4"} err="failed to get container status \"1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4\": rpc error: code = NotFound desc = could not find container \"1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4\": container with ID starting with 1b8eaf047cd657137a76b177c29cfa6c20b114aee9d8fbfe43806820d1995cc4 not found: ID does not exist" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.040014 4520 scope.go:117] "RemoveContainer" containerID="ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.079369 4520 scope.go:117] "RemoveContainer" containerID="58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.102843 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b8bf797-vl52c"] Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.121414 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57b8bf797-vl52c"] Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.129960 4520 scope.go:117] "RemoveContainer" containerID="ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.132025 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f4cf86b6f-dd654"] Jan 30 06:58:49 crc kubenswrapper[4520]: E0130 06:58:49.134935 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46\": container with ID starting with ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46 not found: ID does not exist" containerID="ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.134987 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46"} err="failed to get container status \"ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46\": rpc error: code = NotFound desc = 
could not find container \"ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46\": container with ID starting with ce7f58d635068e102d705394c67935d344f2cb3be01a28c9d034375fc83bfd46 not found: ID does not exist" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.135013 4520 scope.go:117] "RemoveContainer" containerID="58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce" Jan 30 06:58:49 crc kubenswrapper[4520]: E0130 06:58:49.135487 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce\": container with ID starting with 58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce not found: ID does not exist" containerID="58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.135577 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce"} err="failed to get container status \"58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce\": rpc error: code = NotFound desc = could not find container \"58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce\": container with ID starting with 58cf71aa32d6cbabb0aa6aa905bf419cf955a28c872c313eb6b2e72f0d06cdce not found: ID does not exist" Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.142789 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f4cf86b6f-dd654"] Jan 30 06:58:49 crc kubenswrapper[4520]: I0130 06:58:49.721577 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:49 crc kubenswrapper[4520]: E0130 06:58:49.721762 4520 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 06:58:49 crc kubenswrapper[4520]: E0130 06:58:49.722118 4520 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 06:58:49 crc kubenswrapper[4520]: E0130 06:58:49.722216 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift podName:1d0bd1d1-935d-458c-9cf8-c11455791a64 nodeName:}" failed. No retries permitted until 2026-01-30 06:58:53.722190631 +0000 UTC m=+847.350542813 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift") pod "swift-storage-0" (UID: "1d0bd1d1-935d-458c-9cf8-c11455791a64") : configmap "swift-ring-files" not found Jan 30 06:58:50 crc kubenswrapper[4520]: I0130 06:58:50.017010 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" event={"ID":"d4fdb7e3-5390-4912-8331-36f326f97d7c","Type":"ContainerStarted","Data":"269bfec4b7622aa7423843d986a593d6b111a79ffae4958811e2e3431f60f5bc"} Jan 30 06:58:50 crc kubenswrapper[4520]: I0130 06:58:50.017953 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:50 crc kubenswrapper[4520]: I0130 06:58:50.040117 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" podStartSLOduration=3.040098044 podStartE2EDuration="3.040098044s" podCreationTimestamp="2026-01-30 06:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:58:50.033501914 +0000 UTC m=+843.661854094" watchObservedRunningTime="2026-01-30 06:58:50.040098044 +0000 UTC m=+843.668450225" Jan 30 06:58:50 crc kubenswrapper[4520]: I0130 06:58:50.692554 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" path="/var/lib/kubelet/pods/cf090d59-44c4-4f21-a255-3eb4e3e6b64a/volumes" Jan 30 06:58:50 crc kubenswrapper[4520]: I0130 06:58:50.693149 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7743ce8-ab4f-4639-96f8-12d4eef3a560" path="/var/lib/kubelet/pods/d7743ce8-ab4f-4639-96f8-12d4eef3a560/volumes" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.115933 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.116253 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.206263 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.311572 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9hjf2"] Jan 30 06:58:51 crc kubenswrapper[4520]: E0130 06:58:51.312119 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7743ce8-ab4f-4639-96f8-12d4eef3a560" containerName="init" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.312140 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7743ce8-ab4f-4639-96f8-12d4eef3a560" containerName="init" Jan 30 06:58:51 crc kubenswrapper[4520]: E0130 06:58:51.312208 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" containerName="init" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.312214 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" containerName="init" Jan 30 06:58:51 crc kubenswrapper[4520]: E0130 06:58:51.312223 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" containerName="dnsmasq-dns" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.312230 4520 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" containerName="dnsmasq-dns" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.312439 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7743ce8-ab4f-4639-96f8-12d4eef3a560" containerName="init" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.312453 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf090d59-44c4-4f21-a255-3eb4e3e6b64a" containerName="dnsmasq-dns" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.313278 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.320378 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9hjf2"] Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.325273 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.455163 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153de015-b81a-4456-98af-6d7de7c63c41-operator-scripts\") pod \"root-account-create-update-9hjf2\" (UID: \"153de015-b81a-4456-98af-6d7de7c63c41\") " pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.455270 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4st8d\" (UniqueName: \"kubernetes.io/projected/153de015-b81a-4456-98af-6d7de7c63c41-kube-api-access-4st8d\") pod \"root-account-create-update-9hjf2\" (UID: \"153de015-b81a-4456-98af-6d7de7c63c41\") " pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.557939 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4st8d\" (UniqueName: \"kubernetes.io/projected/153de015-b81a-4456-98af-6d7de7c63c41-kube-api-access-4st8d\") pod \"root-account-create-update-9hjf2\" (UID: \"153de015-b81a-4456-98af-6d7de7c63c41\") " pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.558265 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153de015-b81a-4456-98af-6d7de7c63c41-operator-scripts\") pod \"root-account-create-update-9hjf2\" (UID: \"153de015-b81a-4456-98af-6d7de7c63c41\") " pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.559033 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153de015-b81a-4456-98af-6d7de7c63c41-operator-scripts\") pod \"root-account-create-update-9hjf2\" (UID: \"153de015-b81a-4456-98af-6d7de7c63c41\") " pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.584868 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4st8d\" (UniqueName: \"kubernetes.io/projected/153de015-b81a-4456-98af-6d7de7c63c41-kube-api-access-4st8d\") pod \"root-account-create-update-9hjf2\" (UID: \"153de015-b81a-4456-98af-6d7de7c63c41\") " pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:51 crc kubenswrapper[4520]: I0130 06:58:51.644914 4520 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.123004 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.287363 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bh5dv"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.288682 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.309832 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bh5dv"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.381999 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwbb\" (UniqueName: \"kubernetes.io/projected/87fa59f0-b4fd-472f-a612-b79fc97fec36-kube-api-access-knwbb\") pod \"keystone-db-create-bh5dv\" (UID: \"87fa59f0-b4fd-472f-a612-b79fc97fec36\") " pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.382055 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87fa59f0-b4fd-472f-a612-b79fc97fec36-operator-scripts\") pod \"keystone-db-create-bh5dv\" (UID: \"87fa59f0-b4fd-472f-a612-b79fc97fec36\") " pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.383222 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2f92-account-create-update-4mpvt"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.384765 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.396362 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.399793 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f92-account-create-update-4mpvt"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.483791 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bbk\" (UniqueName: \"kubernetes.io/projected/9e19c8d9-7b99-4827-9b6b-c786a3600c46-kube-api-access-j8bbk\") pod \"keystone-2f92-account-create-update-4mpvt\" (UID: \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\") " pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.483902 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knwbb\" (UniqueName: \"kubernetes.io/projected/87fa59f0-b4fd-472f-a612-b79fc97fec36-kube-api-access-knwbb\") pod \"keystone-db-create-bh5dv\" (UID: \"87fa59f0-b4fd-472f-a612-b79fc97fec36\") " pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.483934 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87fa59f0-b4fd-472f-a612-b79fc97fec36-operator-scripts\") pod \"keystone-db-create-bh5dv\" (UID: \"87fa59f0-b4fd-472f-a612-b79fc97fec36\") " pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.484008 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e19c8d9-7b99-4827-9b6b-c786a3600c46-operator-scripts\") pod \"keystone-2f92-account-create-update-4mpvt\" (UID: \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\") " pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.484831 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87fa59f0-b4fd-472f-a612-b79fc97fec36-operator-scripts\") pod \"keystone-db-create-bh5dv\" (UID: \"87fa59f0-b4fd-472f-a612-b79fc97fec36\") " pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.506934 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwbb\" (UniqueName: \"kubernetes.io/projected/87fa59f0-b4fd-472f-a612-b79fc97fec36-kube-api-access-knwbb\") pod \"keystone-db-create-bh5dv\" (UID: \"87fa59f0-b4fd-472f-a612-b79fc97fec36\") " pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.573673 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-744gm"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.574631 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-744gm" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.584851 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-744gm"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.585751 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e19c8d9-7b99-4827-9b6b-c786a3600c46-operator-scripts\") pod \"keystone-2f92-account-create-update-4mpvt\" (UID: \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\") " pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.585921 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bbk\" (UniqueName: \"kubernetes.io/projected/9e19c8d9-7b99-4827-9b6b-c786a3600c46-kube-api-access-j8bbk\") pod \"keystone-2f92-account-create-update-4mpvt\" (UID: \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\") " pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.586383 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e19c8d9-7b99-4827-9b6b-c786a3600c46-operator-scripts\") pod \"keystone-2f92-account-create-update-4mpvt\" (UID: \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\") " pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.603062 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bbk\" (UniqueName: \"kubernetes.io/projected/9e19c8d9-7b99-4827-9b6b-c786a3600c46-kube-api-access-j8bbk\") pod \"keystone-2f92-account-create-update-4mpvt\" (UID: \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\") " pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.624126 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.691696 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74d5704-4893-46ce-912f-20805c99608c-operator-scripts\") pod \"placement-db-create-744gm\" (UID: \"c74d5704-4893-46ce-912f-20805c99608c\") " pod="openstack/placement-db-create-744gm" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.691840 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkwr9\" (UniqueName: \"kubernetes.io/projected/c74d5704-4893-46ce-912f-20805c99608c-kube-api-access-bkwr9\") pod \"placement-db-create-744gm\" (UID: \"c74d5704-4893-46ce-912f-20805c99608c\") " pod="openstack/placement-db-create-744gm" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.712028 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.715978 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4575-account-create-update-pxcsl"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.725720 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.737130 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.753868 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4575-account-create-update-pxcsl"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.794182 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn8dt\" (UniqueName: \"kubernetes.io/projected/66e50498-b61c-48eb-bd9b-002ad02fa6a0-kube-api-access-rn8dt\") pod \"placement-4575-account-create-update-pxcsl\" (UID: \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\") " pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.794320 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74d5704-4893-46ce-912f-20805c99608c-operator-scripts\") pod \"placement-db-create-744gm\" (UID: \"c74d5704-4893-46ce-912f-20805c99608c\") " pod="openstack/placement-db-create-744gm" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.794437 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e50498-b61c-48eb-bd9b-002ad02fa6a0-operator-scripts\") pod \"placement-4575-account-create-update-pxcsl\" (UID: \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\") " pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.794626 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkwr9\" (UniqueName: \"kubernetes.io/projected/c74d5704-4893-46ce-912f-20805c99608c-kube-api-access-bkwr9\") pod \"placement-db-create-744gm\" (UID: \"c74d5704-4893-46ce-912f-20805c99608c\") " pod="openstack/placement-db-create-744gm" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.796071 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74d5704-4893-46ce-912f-20805c99608c-operator-scripts\") pod \"placement-db-create-744gm\" (UID: \"c74d5704-4893-46ce-912f-20805c99608c\") " pod="openstack/placement-db-create-744gm" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.813638 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkwr9\" (UniqueName: \"kubernetes.io/projected/c74d5704-4893-46ce-912f-20805c99608c-kube-api-access-bkwr9\") pod \"placement-db-create-744gm\" (UID: \"c74d5704-4893-46ce-912f-20805c99608c\") " pod="openstack/placement-db-create-744gm" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.838545 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qwxpw"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.849004 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.863768 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qwxpw"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.895065 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-744gm" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.896296 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e50498-b61c-48eb-bd9b-002ad02fa6a0-operator-scripts\") pod \"placement-4575-account-create-update-pxcsl\" (UID: \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\") " pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.896427 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn8dt\" (UniqueName: \"kubernetes.io/projected/66e50498-b61c-48eb-bd9b-002ad02fa6a0-kube-api-access-rn8dt\") pod \"placement-4575-account-create-update-pxcsl\" (UID: \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\") " pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.897120 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e50498-b61c-48eb-bd9b-002ad02fa6a0-operator-scripts\") pod \"placement-4575-account-create-update-pxcsl\" (UID: \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\") " pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.919028 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-38e4-account-create-update-vrcjp"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.920414 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.930074 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-38e4-account-create-update-vrcjp"] Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.933863 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn8dt\" (UniqueName: \"kubernetes.io/projected/66e50498-b61c-48eb-bd9b-002ad02fa6a0-kube-api-access-rn8dt\") pod \"placement-4575-account-create-update-pxcsl\" (UID: \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\") " pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.939051 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.998001 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmf8p\" (UniqueName: \"kubernetes.io/projected/96657e38-5386-49e4-9ea3-b12a72c31fdf-kube-api-access-bmf8p\") pod \"glance-38e4-account-create-update-vrcjp\" (UID: \"96657e38-5386-49e4-9ea3-b12a72c31fdf\") " pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.998095 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96657e38-5386-49e4-9ea3-b12a72c31fdf-operator-scripts\") pod \"glance-38e4-account-create-update-vrcjp\" (UID: \"96657e38-5386-49e4-9ea3-b12a72c31fdf\") " pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.998147 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lscvb\" (UniqueName: 
\"kubernetes.io/projected/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-kube-api-access-lscvb\") pod \"glance-db-create-qwxpw\" (UID: \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\") " pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:52 crc kubenswrapper[4520]: I0130 06:58:52.998181 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-operator-scripts\") pod \"glance-db-create-qwxpw\" (UID: \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\") " pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.053509 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.100574 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lscvb\" (UniqueName: \"kubernetes.io/projected/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-kube-api-access-lscvb\") pod \"glance-db-create-qwxpw\" (UID: \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\") " pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.100630 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-operator-scripts\") pod \"glance-db-create-qwxpw\" (UID: \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\") " pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.100766 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmf8p\" (UniqueName: \"kubernetes.io/projected/96657e38-5386-49e4-9ea3-b12a72c31fdf-kube-api-access-bmf8p\") pod \"glance-38e4-account-create-update-vrcjp\" (UID: \"96657e38-5386-49e4-9ea3-b12a72c31fdf\") " pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.100812 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96657e38-5386-49e4-9ea3-b12a72c31fdf-operator-scripts\") pod \"glance-38e4-account-create-update-vrcjp\" (UID: \"96657e38-5386-49e4-9ea3-b12a72c31fdf\") " pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.101499 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96657e38-5386-49e4-9ea3-b12a72c31fdf-operator-scripts\") pod \"glance-38e4-account-create-update-vrcjp\" (UID: \"96657e38-5386-49e4-9ea3-b12a72c31fdf\") " pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.102512 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-operator-scripts\") pod \"glance-db-create-qwxpw\" (UID: \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\") " pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.119857 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmf8p\" (UniqueName: \"kubernetes.io/projected/96657e38-5386-49e4-9ea3-b12a72c31fdf-kube-api-access-bmf8p\") pod \"glance-38e4-account-create-update-vrcjp\" (UID: 
\"96657e38-5386-49e4-9ea3-b12a72c31fdf\") " pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.134297 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lscvb\" (UniqueName: \"kubernetes.io/projected/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-kube-api-access-lscvb\") pod \"glance-db-create-qwxpw\" (UID: \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\") " pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.165285 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.274405 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:53 crc kubenswrapper[4520]: I0130 06:58:53.818968 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:58:53 crc kubenswrapper[4520]: E0130 06:58:53.819157 4520 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 06:58:53 crc kubenswrapper[4520]: E0130 06:58:53.819186 4520 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 06:58:53 crc kubenswrapper[4520]: E0130 06:58:53.819250 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift podName:1d0bd1d1-935d-458c-9cf8-c11455791a64 nodeName:}" failed. No retries permitted until 2026-01-30 06:59:01.819227795 +0000 UTC m=+855.447579976 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift") pod "swift-storage-0" (UID: "1d0bd1d1-935d-458c-9cf8-c11455791a64") : configmap "swift-ring-files" not found Jan 30 06:58:55 crc kubenswrapper[4520]: I0130 06:58:55.816457 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-744gm"] Jan 30 06:58:55 crc kubenswrapper[4520]: I0130 06:58:55.850182 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-38e4-account-create-update-vrcjp"] Jan 30 06:58:55 crc kubenswrapper[4520]: W0130 06:58:55.855966 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96657e38_5386_49e4_9ea3_b12a72c31fdf.slice/crio-06c46b88d55d66c770a27b24a45c8d7330dfc1feadc01dd5b88244430a0cb1d3 WatchSource:0}: Error finding container 06c46b88d55d66c770a27b24a45c8d7330dfc1feadc01dd5b88244430a0cb1d3: Status 404 returned error can't find the container with id 06c46b88d55d66c770a27b24a45c8d7330dfc1feadc01dd5b88244430a0cb1d3 Jan 30 06:58:55 crc kubenswrapper[4520]: I0130 06:58:55.933533 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4575-account-create-update-pxcsl"] Jan 30 06:58:55 crc kubenswrapper[4520]: I0130 06:58:55.954936 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9hjf2"] Jan 30 06:58:55 crc kubenswrapper[4520]: I0130 06:58:55.960110 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qwxpw"] Jan 30 06:58:55 crc kubenswrapper[4520]: W0130 06:58:55.960601 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod638e5bb8_4a2a_42a5_ab4c_fd75e93b8efa.slice/crio-1de0bb5418cd38ab7816016957540dea1c8f7c1926829c2dea95edc3c6519ac3 WatchSource:0}: Error finding container 1de0bb5418cd38ab7816016957540dea1c8f7c1926829c2dea95edc3c6519ac3: Status 404 returned error can't find the container with id 1de0bb5418cd38ab7816016957540dea1c8f7c1926829c2dea95edc3c6519ac3 Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.077253 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-97w8x" event={"ID":"49410419-7629-431c-9f17-b66263889ede","Type":"ContainerStarted","Data":"4f6b8ba2cada174ccd7cf980796362004f38d492af1605abd0c7d9b05f55c460"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.079396 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d0e0c178-3ca9-4112-a7eb-d013ed5107a2","Type":"ContainerStarted","Data":"8f01e18007aae4b15a6601c1543fcda9765a965615c8632fc280d299ea317975"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.079448 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d0e0c178-3ca9-4112-a7eb-d013ed5107a2","Type":"ContainerStarted","Data":"415cd134687d4806ca808d46bf897d0650c0641ed4497659a046ef951b186f69"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.080231 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.083257 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qwxpw" event={"ID":"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa","Type":"ContainerStarted","Data":"1de0bb5418cd38ab7816016957540dea1c8f7c1926829c2dea95edc3c6519ac3"} 
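The repeated MountVolume.SetUp failures for "etc-swift" above are kubelet retrying a projected volume whose source ConfigMap, openstack/swift-ring-files, does not exist yet; swift-storage-0 cannot start until something publishes it, and the swift-ring-rebalance-97w8x pod that starts in the same window is presumably the producer. Note the doubling backoff: durationBeforeRetry 4s at 06:58:49, then 8s at 06:58:53. A minimal client-go sketch for confirming the missing ConfigMap from a workstation; the kubeconfig path is an assumption, the namespace and names come straight from the log:

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); adjust for the CRC cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The projected volume "etc-swift" references this ConfigMap; until it
	// exists, every MountVolume.SetUp attempt fails exactly as logged.
	_, err = cs.CoreV1().ConfigMaps("openstack").Get(
		context.TODO(), "swift-ring-files", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("swift-ring-files not published yet; swift-storage-0 stays in ContainerCreating")
	case err != nil:
		panic(err)
	default:
		fmt.Println("swift-ring-files exists; the next kubelet retry should mount etc-swift")
	}
}
```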
Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.084263 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4575-account-create-update-pxcsl" event={"ID":"66e50498-b61c-48eb-bd9b-002ad02fa6a0","Type":"ContainerStarted","Data":"3fb98a00e6b6b19af2e3b8e98d89afc551d925cd26e504b3139f08ec74fe1d55"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.086280 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-744gm" event={"ID":"c74d5704-4893-46ce-912f-20805c99608c","Type":"ContainerStarted","Data":"289ebc1ab0b8a8c94ac6c572c1802c8b9640d3a498143f42e8c79ed0b23ae451"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.086323 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-744gm" event={"ID":"c74d5704-4893-46ce-912f-20805c99608c","Type":"ContainerStarted","Data":"da4b3c11b8262726d6d0a0d1ec7113970811b46759dc4194baf6f5b9b90fd822"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.101618 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f92-account-create-update-4mpvt"] Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.112145 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9hjf2" event={"ID":"153de015-b81a-4456-98af-6d7de7c63c41","Type":"ContainerStarted","Data":"7bcad060ea3ee99de552ab4ea6a8d713acb2d23fbb6f5fcd12e97ff1d3172ef1"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.117375 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-38e4-account-create-update-vrcjp" event={"ID":"96657e38-5386-49e4-9ea3-b12a72c31fdf","Type":"ContainerStarted","Data":"6f36f78133aa3919d8172980469d2aaaa9ff7cdb5084aa74fbb93265684545b3"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.117418 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-38e4-account-create-update-vrcjp" event={"ID":"96657e38-5386-49e4-9ea3-b12a72c31fdf","Type":"ContainerStarted","Data":"06c46b88d55d66c770a27b24a45c8d7330dfc1feadc01dd5b88244430a0cb1d3"} Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.132722 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-97w8x" podStartSLOduration=1.594909977 podStartE2EDuration="10.132710509s" podCreationTimestamp="2026-01-30 06:58:46 +0000 UTC" firstStartedPulling="2026-01-30 06:58:46.860136342 +0000 UTC m=+840.488488524" lastFinishedPulling="2026-01-30 06:58:55.397936876 +0000 UTC m=+849.026289056" observedRunningTime="2026-01-30 06:58:56.115152589 +0000 UTC m=+849.743504770" watchObservedRunningTime="2026-01-30 06:58:56.132710509 +0000 UTC m=+849.761062690" Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.157025 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bh5dv"] Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.171429 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-744gm" podStartSLOduration=4.171410058 podStartE2EDuration="4.171410058s" podCreationTimestamp="2026-01-30 06:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:58:56.130646188 +0000 UTC m=+849.758998370" watchObservedRunningTime="2026-01-30 06:58:56.171410058 +0000 UTC m=+849.799762238" Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.175143 4520 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.763923733 podStartE2EDuration="9.175136693s" podCreationTimestamp="2026-01-30 06:58:47 +0000 UTC" firstStartedPulling="2026-01-30 06:58:48.926124445 +0000 UTC m=+842.554476625" lastFinishedPulling="2026-01-30 06:58:55.337337404 +0000 UTC m=+848.965689585" observedRunningTime="2026-01-30 06:58:56.146921988 +0000 UTC m=+849.775274169" watchObservedRunningTime="2026-01-30 06:58:56.175136693 +0000 UTC m=+849.803488874" Jan 30 06:58:56 crc kubenswrapper[4520]: I0130 06:58:56.187013 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-38e4-account-create-update-vrcjp" podStartSLOduration=4.187004506 podStartE2EDuration="4.187004506s" podCreationTimestamp="2026-01-30 06:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:58:56.16057681 +0000 UTC m=+849.788928981" watchObservedRunningTime="2026-01-30 06:58:56.187004506 +0000 UTC m=+849.815356688" Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.129384 4520 generic.go:334] "Generic (PLEG): container finished" podID="96657e38-5386-49e4-9ea3-b12a72c31fdf" containerID="6f36f78133aa3919d8172980469d2aaaa9ff7cdb5084aa74fbb93265684545b3" exitCode=0 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.129476 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-38e4-account-create-update-vrcjp" event={"ID":"96657e38-5386-49e4-9ea3-b12a72c31fdf","Type":"ContainerDied","Data":"6f36f78133aa3919d8172980469d2aaaa9ff7cdb5084aa74fbb93265684545b3"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.132091 4520 generic.go:334] "Generic (PLEG): container finished" podID="638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa" containerID="81ca8543fbaf000cc130b3729de0fed46bc96673b7d5480676bf7004508096fe" exitCode=0 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.132165 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qwxpw" event={"ID":"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa","Type":"ContainerDied","Data":"81ca8543fbaf000cc130b3729de0fed46bc96673b7d5480676bf7004508096fe"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.134185 4520 generic.go:334] "Generic (PLEG): container finished" podID="66e50498-b61c-48eb-bd9b-002ad02fa6a0" containerID="6fd5e39d8888aa8e32e72b902823ecfec2d8a4862fdfb54ce350acbed6441994" exitCode=0 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.134305 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4575-account-create-update-pxcsl" event={"ID":"66e50498-b61c-48eb-bd9b-002ad02fa6a0","Type":"ContainerDied","Data":"6fd5e39d8888aa8e32e72b902823ecfec2d8a4862fdfb54ce350acbed6441994"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.135695 4520 generic.go:334] "Generic (PLEG): container finished" podID="c74d5704-4893-46ce-912f-20805c99608c" containerID="289ebc1ab0b8a8c94ac6c572c1802c8b9640d3a498143f42e8c79ed0b23ae451" exitCode=0 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.135756 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-744gm" event={"ID":"c74d5704-4893-46ce-912f-20805c99608c","Type":"ContainerDied","Data":"289ebc1ab0b8a8c94ac6c572c1802c8b9640d3a498143f42e8c79ed0b23ae451"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.136904 4520 generic.go:334] "Generic (PLEG): container finished" podID="153de015-b81a-4456-98af-6d7de7c63c41" 
containerID="65a23ad6f1cdb4d68c5c822bfed69a5e5a265bf7c3e949038e74887365524361" exitCode=0 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.136973 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9hjf2" event={"ID":"153de015-b81a-4456-98af-6d7de7c63c41","Type":"ContainerDied","Data":"65a23ad6f1cdb4d68c5c822bfed69a5e5a265bf7c3e949038e74887365524361"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.138042 4520 generic.go:334] "Generic (PLEG): container finished" podID="9e19c8d9-7b99-4827-9b6b-c786a3600c46" containerID="379719d2c1d5df021ee12c4afebd31b8461549ce4158e7cc4df54df507a95489" exitCode=0 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.138141 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f92-account-create-update-4mpvt" event={"ID":"9e19c8d9-7b99-4827-9b6b-c786a3600c46","Type":"ContainerDied","Data":"379719d2c1d5df021ee12c4afebd31b8461549ce4158e7cc4df54df507a95489"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.138219 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f92-account-create-update-4mpvt" event={"ID":"9e19c8d9-7b99-4827-9b6b-c786a3600c46","Type":"ContainerStarted","Data":"fd3b85054365b2213f5a18a065dffe99cc71bde4d0f12a6b1bef8b19a1dd705c"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.139877 4520 generic.go:334] "Generic (PLEG): container finished" podID="87fa59f0-b4fd-472f-a612-b79fc97fec36" containerID="b06002c5d1465d1a40ff40fa02c602b02c1221af7c39b605a28729504bfb6dbd" exitCode=0 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.140995 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bh5dv" event={"ID":"87fa59f0-b4fd-472f-a612-b79fc97fec36","Type":"ContainerDied","Data":"b06002c5d1465d1a40ff40fa02c602b02c1221af7c39b605a28729504bfb6dbd"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.141377 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bh5dv" event={"ID":"87fa59f0-b4fd-472f-a612-b79fc97fec36","Type":"ContainerStarted","Data":"1667de82c1e4d3fc0ef1f3871b832d48cb1996683f5c947f6747b71d0b504c37"} Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.504032 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.562487 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f744cb77-wjhmz"] Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.562779 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" podUID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" containerName="dnsmasq-dns" containerID="cri-o://5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca" gracePeriod=10 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.794080 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.794403 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.794474 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.795417 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23b7c2584fae4db0c5cd58feba27cd2cddcee2416ca541fef55d331d3df60688"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.795487 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://23b7c2584fae4db0c5cd58feba27cd2cddcee2416ca541fef55d331d3df60688" gracePeriod=600 Jan 30 06:58:57 crc kubenswrapper[4520]: I0130 06:58:57.988830 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.131705 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-dns-svc\") pod \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.131784 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-config\") pod \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.131932 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-kube-api-access-qdtt4\") pod \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\" (UID: \"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb\") " Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.142704 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-kube-api-access-qdtt4" (OuterVolumeSpecName: "kube-api-access-qdtt4") pod "d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" (UID: "d8afe178-46ca-433c-8ce0-b0ab1fb61ffb"). InnerVolumeSpecName "kube-api-access-qdtt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.153832 4520 generic.go:334] "Generic (PLEG): container finished" podID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" containerID="5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca" exitCode=0 Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.153899 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.153924 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" event={"ID":"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb","Type":"ContainerDied","Data":"5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca"} Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.154147 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f744cb77-wjhmz" event={"ID":"d8afe178-46ca-433c-8ce0-b0ab1fb61ffb","Type":"ContainerDied","Data":"54e89ed464c549de060313b46986aff91d4aecfe06e379e240ca22620f24aea1"} Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.154172 4520 scope.go:117] "RemoveContainer" containerID="5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.159907 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="23b7c2584fae4db0c5cd58feba27cd2cddcee2416ca541fef55d331d3df60688" exitCode=0 Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.160073 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"23b7c2584fae4db0c5cd58feba27cd2cddcee2416ca541fef55d331d3df60688"} Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.195617 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-config" (OuterVolumeSpecName: "config") pod "d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" (UID: "d8afe178-46ca-433c-8ce0-b0ab1fb61ffb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.209347 4520 scope.go:117] "RemoveContainer" containerID="054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.210114 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" (UID: "d8afe178-46ca-433c-8ce0-b0ab1fb61ffb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.236370 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.236489 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.236573 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb-kube-api-access-qdtt4\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.287984 4520 scope.go:117] "RemoveContainer" containerID="5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca" Jan 30 06:58:58 crc kubenswrapper[4520]: E0130 06:58:58.289827 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca\": container with ID starting with 5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca not found: ID does not exist" containerID="5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.290744 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca"} err="failed to get container status \"5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca\": rpc error: code = NotFound desc = could not find container \"5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca\": container with ID starting with 5b5e46f8e765b0d0298793cb65b8c7a0f0aa626987ac8df52aaf177820bde4ca not found: ID does not exist" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.290869 4520 scope.go:117] "RemoveContainer" containerID="054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0" Jan 30 06:58:58 crc kubenswrapper[4520]: E0130 06:58:58.293224 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0\": container with ID starting with 054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0 not found: ID does not exist" containerID="054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.293250 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0"} err="failed to get container status \"054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0\": rpc error: code = NotFound desc = could not find container \"054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0\": container with ID starting with 054e95b024f12293d22ef137d25c19135953766ffbf2a566e1ad91438676f6a0 not found: ID does not exist" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.293265 4520 scope.go:117] "RemoveContainer" containerID="262e0cf10792038e17c9535c842bb850c34802d1edf6585f98c352abd0f2a350" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 
06:58:58.453236 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.486251 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f744cb77-wjhmz"] Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.490580 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f744cb77-wjhmz"] Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.542026 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-operator-scripts\") pod \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\" (UID: \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\") " Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.542081 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lscvb\" (UniqueName: \"kubernetes.io/projected/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-kube-api-access-lscvb\") pod \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\" (UID: \"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa\") " Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.542661 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa" (UID: "638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.546084 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-kube-api-access-lscvb" (OuterVolumeSpecName: "kube-api-access-lscvb") pod "638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa" (UID: "638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa"). InnerVolumeSpecName "kube-api-access-lscvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.644584 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.644841 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lscvb\" (UniqueName: \"kubernetes.io/projected/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa-kube-api-access-lscvb\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.708244 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" path="/var/lib/kubelet/pods/d8afe178-46ca-433c-8ce0-b0ab1fb61ffb/volumes" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.917685 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-744gm" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.923746 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.929898 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.935617 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.944439 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:58 crc kubenswrapper[4520]: I0130 06:58:58.951221 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.050765 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knwbb\" (UniqueName: \"kubernetes.io/projected/87fa59f0-b4fd-472f-a612-b79fc97fec36-kube-api-access-knwbb\") pod \"87fa59f0-b4fd-472f-a612-b79fc97fec36\" (UID: \"87fa59f0-b4fd-472f-a612-b79fc97fec36\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.050820 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74d5704-4893-46ce-912f-20805c99608c-operator-scripts\") pod \"c74d5704-4893-46ce-912f-20805c99608c\" (UID: \"c74d5704-4893-46ce-912f-20805c99608c\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.050858 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4st8d\" (UniqueName: \"kubernetes.io/projected/153de015-b81a-4456-98af-6d7de7c63c41-kube-api-access-4st8d\") pod \"153de015-b81a-4456-98af-6d7de7c63c41\" (UID: \"153de015-b81a-4456-98af-6d7de7c63c41\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051014 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmf8p\" (UniqueName: \"kubernetes.io/projected/96657e38-5386-49e4-9ea3-b12a72c31fdf-kube-api-access-bmf8p\") pod \"96657e38-5386-49e4-9ea3-b12a72c31fdf\" (UID: \"96657e38-5386-49e4-9ea3-b12a72c31fdf\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051045 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e19c8d9-7b99-4827-9b6b-c786a3600c46-operator-scripts\") pod \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\" (UID: \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051072 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e50498-b61c-48eb-bd9b-002ad02fa6a0-operator-scripts\") pod \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\" (UID: \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051116 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8bbk\" (UniqueName: \"kubernetes.io/projected/9e19c8d9-7b99-4827-9b6b-c786a3600c46-kube-api-access-j8bbk\") pod \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\" (UID: \"9e19c8d9-7b99-4827-9b6b-c786a3600c46\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051148 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153de015-b81a-4456-98af-6d7de7c63c41-operator-scripts\") pod \"153de015-b81a-4456-98af-6d7de7c63c41\" (UID: 
\"153de015-b81a-4456-98af-6d7de7c63c41\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051168 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87fa59f0-b4fd-472f-a612-b79fc97fec36-operator-scripts\") pod \"87fa59f0-b4fd-472f-a612-b79fc97fec36\" (UID: \"87fa59f0-b4fd-472f-a612-b79fc97fec36\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051216 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn8dt\" (UniqueName: \"kubernetes.io/projected/66e50498-b61c-48eb-bd9b-002ad02fa6a0-kube-api-access-rn8dt\") pod \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\" (UID: \"66e50498-b61c-48eb-bd9b-002ad02fa6a0\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051246 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96657e38-5386-49e4-9ea3-b12a72c31fdf-operator-scripts\") pod \"96657e38-5386-49e4-9ea3-b12a72c31fdf\" (UID: \"96657e38-5386-49e4-9ea3-b12a72c31fdf\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.051277 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkwr9\" (UniqueName: \"kubernetes.io/projected/c74d5704-4893-46ce-912f-20805c99608c-kube-api-access-bkwr9\") pod \"c74d5704-4893-46ce-912f-20805c99608c\" (UID: \"c74d5704-4893-46ce-912f-20805c99608c\") " Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.052201 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66e50498-b61c-48eb-bd9b-002ad02fa6a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66e50498-b61c-48eb-bd9b-002ad02fa6a0" (UID: "66e50498-b61c-48eb-bd9b-002ad02fa6a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.053172 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fa59f0-b4fd-472f-a612-b79fc97fec36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87fa59f0-b4fd-472f-a612-b79fc97fec36" (UID: "87fa59f0-b4fd-472f-a612-b79fc97fec36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.053700 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c74d5704-4893-46ce-912f-20805c99608c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c74d5704-4893-46ce-912f-20805c99608c" (UID: "c74d5704-4893-46ce-912f-20805c99608c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.053768 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153de015-b81a-4456-98af-6d7de7c63c41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "153de015-b81a-4456-98af-6d7de7c63c41" (UID: "153de015-b81a-4456-98af-6d7de7c63c41"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.054306 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e19c8d9-7b99-4827-9b6b-c786a3600c46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e19c8d9-7b99-4827-9b6b-c786a3600c46" (UID: "9e19c8d9-7b99-4827-9b6b-c786a3600c46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.054326 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96657e38-5386-49e4-9ea3-b12a72c31fdf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96657e38-5386-49e4-9ea3-b12a72c31fdf" (UID: "96657e38-5386-49e4-9ea3-b12a72c31fdf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.058355 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96657e38-5386-49e4-9ea3-b12a72c31fdf-kube-api-access-bmf8p" (OuterVolumeSpecName: "kube-api-access-bmf8p") pod "96657e38-5386-49e4-9ea3-b12a72c31fdf" (UID: "96657e38-5386-49e4-9ea3-b12a72c31fdf"). InnerVolumeSpecName "kube-api-access-bmf8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.058997 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74d5704-4893-46ce-912f-20805c99608c-kube-api-access-bkwr9" (OuterVolumeSpecName: "kube-api-access-bkwr9") pod "c74d5704-4893-46ce-912f-20805c99608c" (UID: "c74d5704-4893-46ce-912f-20805c99608c"). InnerVolumeSpecName "kube-api-access-bkwr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.059564 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fa59f0-b4fd-472f-a612-b79fc97fec36-kube-api-access-knwbb" (OuterVolumeSpecName: "kube-api-access-knwbb") pod "87fa59f0-b4fd-472f-a612-b79fc97fec36" (UID: "87fa59f0-b4fd-472f-a612-b79fc97fec36"). InnerVolumeSpecName "kube-api-access-knwbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.060535 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e19c8d9-7b99-4827-9b6b-c786a3600c46-kube-api-access-j8bbk" (OuterVolumeSpecName: "kube-api-access-j8bbk") pod "9e19c8d9-7b99-4827-9b6b-c786a3600c46" (UID: "9e19c8d9-7b99-4827-9b6b-c786a3600c46"). InnerVolumeSpecName "kube-api-access-j8bbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.060628 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e50498-b61c-48eb-bd9b-002ad02fa6a0-kube-api-access-rn8dt" (OuterVolumeSpecName: "kube-api-access-rn8dt") pod "66e50498-b61c-48eb-bd9b-002ad02fa6a0" (UID: "66e50498-b61c-48eb-bd9b-002ad02fa6a0"). InnerVolumeSpecName "kube-api-access-rn8dt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.066334 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153de015-b81a-4456-98af-6d7de7c63c41-kube-api-access-4st8d" (OuterVolumeSpecName: "kube-api-access-4st8d") pod "153de015-b81a-4456-98af-6d7de7c63c41" (UID: "153de015-b81a-4456-98af-6d7de7c63c41"). InnerVolumeSpecName "kube-api-access-4st8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153859 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmf8p\" (UniqueName: \"kubernetes.io/projected/96657e38-5386-49e4-9ea3-b12a72c31fdf-kube-api-access-bmf8p\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153891 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e19c8d9-7b99-4827-9b6b-c786a3600c46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153902 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e50498-b61c-48eb-bd9b-002ad02fa6a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153912 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8bbk\" (UniqueName: \"kubernetes.io/projected/9e19c8d9-7b99-4827-9b6b-c786a3600c46-kube-api-access-j8bbk\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153957 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/153de015-b81a-4456-98af-6d7de7c63c41-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153967 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87fa59f0-b4fd-472f-a612-b79fc97fec36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153976 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn8dt\" (UniqueName: \"kubernetes.io/projected/66e50498-b61c-48eb-bd9b-002ad02fa6a0-kube-api-access-rn8dt\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153987 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96657e38-5386-49e4-9ea3-b12a72c31fdf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.153997 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkwr9\" (UniqueName: \"kubernetes.io/projected/c74d5704-4893-46ce-912f-20805c99608c-kube-api-access-bkwr9\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.154007 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knwbb\" (UniqueName: \"kubernetes.io/projected/87fa59f0-b4fd-472f-a612-b79fc97fec36-kube-api-access-knwbb\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.154016 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74d5704-4893-46ce-912f-20805c99608c-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.154025 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4st8d\" (UniqueName: \"kubernetes.io/projected/153de015-b81a-4456-98af-6d7de7c63c41-kube-api-access-4st8d\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.176160 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"00188edbc7a901128a316b70d44312dd0aa78297ee86dd9a3630c6ec14392173"} Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.178133 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qwxpw" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.179961 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qwxpw" event={"ID":"638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa","Type":"ContainerDied","Data":"1de0bb5418cd38ab7816016957540dea1c8f7c1926829c2dea95edc3c6519ac3"} Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.180023 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de0bb5418cd38ab7816016957540dea1c8f7c1926829c2dea95edc3c6519ac3" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.180957 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-744gm" event={"ID":"c74d5704-4893-46ce-912f-20805c99608c","Type":"ContainerDied","Data":"da4b3c11b8262726d6d0a0d1ec7113970811b46759dc4194baf6f5b9b90fd822"} Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.181065 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4b3c11b8262726d6d0a0d1ec7113970811b46759dc4194baf6f5b9b90fd822" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.181078 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-744gm" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.182919 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9hjf2" event={"ID":"153de015-b81a-4456-98af-6d7de7c63c41","Type":"ContainerDied","Data":"7bcad060ea3ee99de552ab4ea6a8d713acb2d23fbb6f5fcd12e97ff1d3172ef1"} Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.183168 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcad060ea3ee99de552ab4ea6a8d713acb2d23fbb6f5fcd12e97ff1d3172ef1" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.183078 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9hjf2" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.184764 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f92-account-create-update-4mpvt" event={"ID":"9e19c8d9-7b99-4827-9b6b-c786a3600c46","Type":"ContainerDied","Data":"fd3b85054365b2213f5a18a065dffe99cc71bde4d0f12a6b1bef8b19a1dd705c"} Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.184780 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f92-account-create-update-4mpvt" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.185084 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd3b85054365b2213f5a18a065dffe99cc71bde4d0f12a6b1bef8b19a1dd705c" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.186350 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bh5dv" event={"ID":"87fa59f0-b4fd-472f-a612-b79fc97fec36","Type":"ContainerDied","Data":"1667de82c1e4d3fc0ef1f3871b832d48cb1996683f5c947f6747b71d0b504c37"} Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.186382 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bh5dv" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.186389 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1667de82c1e4d3fc0ef1f3871b832d48cb1996683f5c947f6747b71d0b504c37" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.188711 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4575-account-create-update-pxcsl" event={"ID":"66e50498-b61c-48eb-bd9b-002ad02fa6a0","Type":"ContainerDied","Data":"3fb98a00e6b6b19af2e3b8e98d89afc551d925cd26e504b3139f08ec74fe1d55"} Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.188732 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4575-account-create-update-pxcsl" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.188740 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb98a00e6b6b19af2e3b8e98d89afc551d925cd26e504b3139f08ec74fe1d55" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.198842 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-38e4-account-create-update-vrcjp" event={"ID":"96657e38-5386-49e4-9ea3-b12a72c31fdf","Type":"ContainerDied","Data":"06c46b88d55d66c770a27b24a45c8d7330dfc1feadc01dd5b88244430a0cb1d3"} Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.198864 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c46b88d55d66c770a27b24a45c8d7330dfc1feadc01dd5b88244430a0cb1d3" Jan 30 06:58:59 crc kubenswrapper[4520]: I0130 06:58:59.198896 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-38e4-account-create-update-vrcjp" Jan 30 06:59:01 crc kubenswrapper[4520]: I0130 06:59:01.907621 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:59:01 crc kubenswrapper[4520]: E0130 06:59:01.907830 4520 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 06:59:01 crc kubenswrapper[4520]: E0130 06:59:01.908088 4520 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 06:59:01 crc kubenswrapper[4520]: E0130 06:59:01.908143 4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift podName:1d0bd1d1-935d-458c-9cf8-c11455791a64 nodeName:}" failed. 
No retries permitted until 2026-01-30 06:59:17.908127173 +0000 UTC m=+871.536479355 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift") pod "swift-storage-0" (UID: "1d0bd1d1-935d-458c-9cf8-c11455791a64") : configmap "swift-ring-files" not found
Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.134163 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-fglmz"]
Jan 30 06:59:03 crc kubenswrapper[4520]: E0130 06:59:03.134987 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e50498-b61c-48eb-bd9b-002ad02fa6a0" containerName="mariadb-account-create-update"
Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135002 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e50498-b61c-48eb-bd9b-002ad02fa6a0" containerName="mariadb-account-create-update"
Jan 30 06:59:03 crc kubenswrapper[4520]: E0130 06:59:03.135013 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e19c8d9-7b99-4827-9b6b-c786a3600c46" containerName="mariadb-account-create-update"
Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135019 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e19c8d9-7b99-4827-9b6b-c786a3600c46" containerName="mariadb-account-create-update"
Jan 30 06:59:03 crc kubenswrapper[4520]: E0130 06:59:03.135028 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fa59f0-b4fd-472f-a612-b79fc97fec36" containerName="mariadb-database-create"
Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135034 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fa59f0-b4fd-472f-a612-b79fc97fec36" containerName="mariadb-database-create"
Jan 30 06:59:03 crc kubenswrapper[4520]: E0130 06:59:03.135049 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96657e38-5386-49e4-9ea3-b12a72c31fdf" containerName="mariadb-account-create-update"
Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135055 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="96657e38-5386-49e4-9ea3-b12a72c31fdf" containerName="mariadb-account-create-update"
Jan 30 06:59:03 crc kubenswrapper[4520]: E0130 06:59:03.135063 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74d5704-4893-46ce-912f-20805c99608c" containerName="mariadb-database-create"
Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135068 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74d5704-4893-46ce-912f-20805c99608c" containerName="mariadb-database-create"
Jan 30 06:59:03 crc kubenswrapper[4520]: E0130 06:59:03.135080 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa" containerName="mariadb-database-create"
Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135086 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa" containerName="mariadb-database-create"
Jan 30 06:59:03 crc kubenswrapper[4520]: E0130 06:59:03.135103 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153de015-b81a-4456-98af-6d7de7c63c41" containerName="mariadb-account-create-update"
Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135110 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="153de015-b81a-4456-98af-6d7de7c63c41" containerName="mariadb-account-create-update"
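The failed SetUp above shows the kubelet's per-operation exponential backoff: the etc-swift projected volume cannot be built while the swift-ring-files ConfigMap is missing, so further retries are blocked for 16s, until 06:59:17.908. A delay that long suggests several earlier failures with the wait roughly doubling each time, and the MountVolume records at 06:59:17.98 further down show the retry succeeding almost as soon as the window opens, by which time the swift-ring-rebalance job seen earlier has finished. A small sketch for pulling these retry windows out of a log like this one, again assuming one record per line; BACKOFF mirrors the message text above and is an assumption, not a stable kubelet interface.

    import re
    from datetime import datetime

    # Assumed shape of the nestedpendingoperations record quoted above.
    BACKOFF = re.compile(
        r'No retries permitted until (?P<until>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)'
        r'.*?\(durationBeforeRetry (?P<delay>[0-9a-z.]+)\)\. Error: (?P<err>.+)'
    )

    def retry_windows(lines):
        for line in lines:
            m = BACKOFF.search(line)
            if m:
                # strptime's %f takes at most microseconds; trim the ns tail
                until = datetime.strptime(m.group("until")[:26], "%Y-%m-%d %H:%M:%S.%f")
                yield until, m.group("delay"), m.group("err").strip()

For the record above this yields a window ending at 06:59:17.908127 with a 16s delay and an error ending in: configmap "swift-ring-files" not found.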
"RemoveStaleState: removing container" podUID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" containerName="init" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135123 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" containerName="init" Jan 30 06:59:03 crc kubenswrapper[4520]: E0130 06:59:03.135133 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" containerName="dnsmasq-dns" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135138 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" containerName="dnsmasq-dns" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135287 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="96657e38-5386-49e4-9ea3-b12a72c31fdf" containerName="mariadb-account-create-update" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135303 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e50498-b61c-48eb-bd9b-002ad02fa6a0" containerName="mariadb-account-create-update" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135311 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa" containerName="mariadb-database-create" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135319 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e19c8d9-7b99-4827-9b6b-c786a3600c46" containerName="mariadb-account-create-update" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135326 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8afe178-46ca-433c-8ce0-b0ab1fb61ffb" containerName="dnsmasq-dns" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135333 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74d5704-4893-46ce-912f-20805c99608c" containerName="mariadb-database-create" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135341 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fa59f0-b4fd-472f-a612-b79fc97fec36" containerName="mariadb-database-create" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135350 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="153de015-b81a-4456-98af-6d7de7c63c41" containerName="mariadb-account-create-update" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.135849 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.138104 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ndhjm" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.138724 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.156860 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fglmz"] Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.233160 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-combined-ca-bundle\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.233201 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbszc\" (UniqueName: \"kubernetes.io/projected/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-kube-api-access-gbszc\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.233230 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-db-sync-config-data\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.233335 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-config-data\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.236008 4520 generic.go:334] "Generic (PLEG): container finished" podID="49410419-7629-431c-9f17-b66263889ede" containerID="4f6b8ba2cada174ccd7cf980796362004f38d492af1605abd0c7d9b05f55c460" exitCode=0 Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.236061 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-97w8x" event={"ID":"49410419-7629-431c-9f17-b66263889ede","Type":"ContainerDied","Data":"4f6b8ba2cada174ccd7cf980796362004f38d492af1605abd0c7d9b05f55c460"} Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.334911 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-combined-ca-bundle\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.334952 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbszc\" (UniqueName: \"kubernetes.io/projected/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-kube-api-access-gbszc\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.334987 4520 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-db-sync-config-data\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.335012 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-config-data\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.352168 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-db-sync-config-data\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.354989 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-combined-ca-bundle\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.355478 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-config-data\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.366998 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbszc\" (UniqueName: \"kubernetes.io/projected/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-kube-api-access-gbszc\") pod \"glance-db-sync-fglmz\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.449270 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:03 crc kubenswrapper[4520]: I0130 06:59:03.935761 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fglmz"] Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.245046 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fglmz" event={"ID":"ddd50154-e55a-4dae-ac2d-3528b94ff9f6","Type":"ContainerStarted","Data":"dfbfe19373dfecb2f9195f81a13443981243ad194b4682e3f57304548f3e2bd5"} Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.489002 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.665035 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-swiftconf\") pod \"49410419-7629-431c-9f17-b66263889ede\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.665105 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-scripts\") pod \"49410419-7629-431c-9f17-b66263889ede\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.665189 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/49410419-7629-431c-9f17-b66263889ede-etc-swift\") pod \"49410419-7629-431c-9f17-b66263889ede\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.665260 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhtqx\" (UniqueName: \"kubernetes.io/projected/49410419-7629-431c-9f17-b66263889ede-kube-api-access-dhtqx\") pod \"49410419-7629-431c-9f17-b66263889ede\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.665345 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-dispersionconf\") pod \"49410419-7629-431c-9f17-b66263889ede\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.665381 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-combined-ca-bundle\") pod \"49410419-7629-431c-9f17-b66263889ede\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.665448 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-ring-data-devices\") pod \"49410419-7629-431c-9f17-b66263889ede\" (UID: \"49410419-7629-431c-9f17-b66263889ede\") " Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.666365 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49410419-7629-431c-9f17-b66263889ede-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "49410419-7629-431c-9f17-b66263889ede" (UID: "49410419-7629-431c-9f17-b66263889ede"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.666833 4520 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/49410419-7629-431c-9f17-b66263889ede-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.667125 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "49410419-7629-431c-9f17-b66263889ede" (UID: "49410419-7629-431c-9f17-b66263889ede"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.672429 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49410419-7629-431c-9f17-b66263889ede-kube-api-access-dhtqx" (OuterVolumeSpecName: "kube-api-access-dhtqx") pod "49410419-7629-431c-9f17-b66263889ede" (UID: "49410419-7629-431c-9f17-b66263889ede"). InnerVolumeSpecName "kube-api-access-dhtqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.698122 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "49410419-7629-431c-9f17-b66263889ede" (UID: "49410419-7629-431c-9f17-b66263889ede"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.698863 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "49410419-7629-431c-9f17-b66263889ede" (UID: "49410419-7629-431c-9f17-b66263889ede"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.701266 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-scripts" (OuterVolumeSpecName: "scripts") pod "49410419-7629-431c-9f17-b66263889ede" (UID: "49410419-7629-431c-9f17-b66263889ede"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.713826 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49410419-7629-431c-9f17-b66263889ede" (UID: "49410419-7629-431c-9f17-b66263889ede"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.762579 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9hjf2"] Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.766891 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9hjf2"] Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.767821 4520 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.767846 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.767857 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhtqx\" (UniqueName: \"kubernetes.io/projected/49410419-7629-431c-9f17-b66263889ede-kube-api-access-dhtqx\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.767867 4520 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.767876 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49410419-7629-431c-9f17-b66263889ede-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:04 crc kubenswrapper[4520]: I0130 06:59:04.767884 4520 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/49410419-7629-431c-9f17-b66263889ede-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:05 crc kubenswrapper[4520]: I0130 06:59:05.256358 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-97w8x" event={"ID":"49410419-7629-431c-9f17-b66263889ede","Type":"ContainerDied","Data":"37651600b8566d4c343cddc258c7d7ab364de2d8a1ecedc86ccc0673d1b32403"} Jan 30 06:59:05 crc kubenswrapper[4520]: I0130 06:59:05.256451 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37651600b8566d4c343cddc258c7d7ab364de2d8a1ecedc86ccc0673d1b32403" Jan 30 06:59:05 crc kubenswrapper[4520]: I0130 06:59:05.256401 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-97w8x" Jan 30 06:59:06 crc kubenswrapper[4520]: I0130 06:59:06.699721 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153de015-b81a-4456-98af-6d7de7c63c41" path="/var/lib/kubelet/pods/153de015-b81a-4456-98af-6d7de7c63c41/volumes" Jan 30 06:59:08 crc kubenswrapper[4520]: I0130 06:59:08.302042 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.770364 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rfg99"] Jan 30 06:59:09 crc kubenswrapper[4520]: E0130 06:59:09.771591 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49410419-7629-431c-9f17-b66263889ede" containerName="swift-ring-rebalance" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.771682 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="49410419-7629-431c-9f17-b66263889ede" containerName="swift-ring-rebalance" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.771953 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="49410419-7629-431c-9f17-b66263889ede" containerName="swift-ring-rebalance" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.772640 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.776137 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.778880 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rfg99"] Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.884717 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849821f7-8f89-49da-b649-5cd380b989a7-operator-scripts\") pod \"root-account-create-update-rfg99\" (UID: \"849821f7-8f89-49da-b649-5cd380b989a7\") " pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.885054 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphqc\" (UniqueName: \"kubernetes.io/projected/849821f7-8f89-49da-b649-5cd380b989a7-kube-api-access-kphqc\") pod \"root-account-create-update-rfg99\" (UID: \"849821f7-8f89-49da-b649-5cd380b989a7\") " pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.992809 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849821f7-8f89-49da-b649-5cd380b989a7-operator-scripts\") pod \"root-account-create-update-rfg99\" (UID: \"849821f7-8f89-49da-b649-5cd380b989a7\") " pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.992937 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphqc\" (UniqueName: \"kubernetes.io/projected/849821f7-8f89-49da-b649-5cd380b989a7-kube-api-access-kphqc\") pod \"root-account-create-update-rfg99\" (UID: \"849821f7-8f89-49da-b649-5cd380b989a7\") " pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:09 crc kubenswrapper[4520]: I0130 06:59:09.993734 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849821f7-8f89-49da-b649-5cd380b989a7-operator-scripts\") pod \"root-account-create-update-rfg99\" (UID: \"849821f7-8f89-49da-b649-5cd380b989a7\") " pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:10 crc kubenswrapper[4520]: I0130 06:59:10.010799 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphqc\" (UniqueName: \"kubernetes.io/projected/849821f7-8f89-49da-b649-5cd380b989a7-kube-api-access-kphqc\") pod \"root-account-create-update-rfg99\" (UID: \"849821f7-8f89-49da-b649-5cd380b989a7\") " pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:10 crc kubenswrapper[4520]: I0130 06:59:10.090227 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:10 crc kubenswrapper[4520]: I0130 06:59:10.432664 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rfg99"] Jan 30 06:59:11 crc kubenswrapper[4520]: I0130 06:59:11.329632 4520 generic.go:334] "Generic (PLEG): container finished" podID="849821f7-8f89-49da-b649-5cd380b989a7" containerID="372522daaefb177fec3e4a8baa548fefb8b78690aaf6fe6803803b69586eb98e" exitCode=0 Jan 30 06:59:11 crc kubenswrapper[4520]: I0130 06:59:11.330012 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rfg99" event={"ID":"849821f7-8f89-49da-b649-5cd380b989a7","Type":"ContainerDied","Data":"372522daaefb177fec3e4a8baa548fefb8b78690aaf6fe6803803b69586eb98e"} Jan 30 06:59:11 crc kubenswrapper[4520]: I0130 06:59:11.330050 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rfg99" event={"ID":"849821f7-8f89-49da-b649-5cd380b989a7","Type":"ContainerStarted","Data":"83189fc90c91667de37453e572c83795e16cf9a4c76a9ab192619621a868b367"} Jan 30 06:59:12 crc kubenswrapper[4520]: I0130 06:59:12.619357 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:12 crc kubenswrapper[4520]: I0130 06:59:12.752196 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849821f7-8f89-49da-b649-5cd380b989a7-operator-scripts\") pod \"849821f7-8f89-49da-b649-5cd380b989a7\" (UID: \"849821f7-8f89-49da-b649-5cd380b989a7\") " Jan 30 06:59:12 crc kubenswrapper[4520]: I0130 06:59:12.752253 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kphqc\" (UniqueName: \"kubernetes.io/projected/849821f7-8f89-49da-b649-5cd380b989a7-kube-api-access-kphqc\") pod \"849821f7-8f89-49da-b649-5cd380b989a7\" (UID: \"849821f7-8f89-49da-b649-5cd380b989a7\") " Jan 30 06:59:12 crc kubenswrapper[4520]: I0130 06:59:12.752725 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849821f7-8f89-49da-b649-5cd380b989a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "849821f7-8f89-49da-b649-5cd380b989a7" (UID: "849821f7-8f89-49da-b649-5cd380b989a7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:12 crc kubenswrapper[4520]: I0130 06:59:12.753400 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849821f7-8f89-49da-b649-5cd380b989a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:12 crc kubenswrapper[4520]: I0130 06:59:12.758962 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849821f7-8f89-49da-b649-5cd380b989a7-kube-api-access-kphqc" (OuterVolumeSpecName: "kube-api-access-kphqc") pod "849821f7-8f89-49da-b649-5cd380b989a7" (UID: "849821f7-8f89-49da-b649-5cd380b989a7"). InnerVolumeSpecName "kube-api-access-kphqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:59:12 crc kubenswrapper[4520]: I0130 06:59:12.855359 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kphqc\" (UniqueName: \"kubernetes.io/projected/849821f7-8f89-49da-b649-5cd380b989a7-kube-api-access-kphqc\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.347895 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rfg99" event={"ID":"849821f7-8f89-49da-b649-5cd380b989a7","Type":"ContainerDied","Data":"83189fc90c91667de37453e572c83795e16cf9a4c76a9ab192619621a868b367"} Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.347958 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83189fc90c91667de37453e572c83795e16cf9a4c76a9ab192619621a868b367" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.347995 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rfg99" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.680063 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xrbrq" podUID="1f432695-8546-408b-a2f3-5c5df41a81cf" containerName="ovn-controller" probeResult="failure" output=< Jan 30 06:59:13 crc kubenswrapper[4520]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 06:59:13 crc kubenswrapper[4520]: > Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.680215 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.692971 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xkmvw" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.896466 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xrbrq-config-524v5"] Jan 30 06:59:13 crc kubenswrapper[4520]: E0130 06:59:13.896870 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849821f7-8f89-49da-b649-5cd380b989a7" containerName="mariadb-account-create-update" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.896890 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="849821f7-8f89-49da-b649-5cd380b989a7" containerName="mariadb-account-create-update" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.897089 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="849821f7-8f89-49da-b649-5cd380b989a7" containerName="mariadb-account-create-update" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.897689 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.907583 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrbrq-config-524v5"] Jan 30 06:59:13 crc kubenswrapper[4520]: I0130 06:59:13.909474 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.081610 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-scripts\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.081658 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.081688 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvt99\" (UniqueName: \"kubernetes.io/projected/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-kube-api-access-bvt99\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.081762 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-additional-scripts\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.081796 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run-ovn\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.081931 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-log-ovn\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.183137 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-log-ovn\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.183236 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-scripts\") pod 
\"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.183259 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.183282 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvt99\" (UniqueName: \"kubernetes.io/projected/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-kube-api-access-bvt99\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.183331 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-additional-scripts\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.183362 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run-ovn\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.183736 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run-ovn\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.183824 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-log-ovn\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.184194 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.184776 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-additional-scripts\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.185408 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-scripts\") pod 
\"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.205776 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvt99\" (UniqueName: \"kubernetes.io/projected/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-kube-api-access-bvt99\") pod \"ovn-controller-xrbrq-config-524v5\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.223268 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:14 crc kubenswrapper[4520]: I0130 06:59:14.768385 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrbrq-config-524v5"] Jan 30 06:59:15 crc kubenswrapper[4520]: I0130 06:59:15.365776 4520 generic.go:334] "Generic (PLEG): container finished" podID="520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" containerID="6f894844b125b048d0fffa56897834c5a61f59e0f74641ff50fef0a6621f13f3" exitCode=0 Jan 30 06:59:15 crc kubenswrapper[4520]: I0130 06:59:15.366132 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrbrq-config-524v5" event={"ID":"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30","Type":"ContainerDied","Data":"6f894844b125b048d0fffa56897834c5a61f59e0f74641ff50fef0a6621f13f3"} Jan 30 06:59:15 crc kubenswrapper[4520]: I0130 06:59:15.366170 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrbrq-config-524v5" event={"ID":"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30","Type":"ContainerStarted","Data":"8f33d01dd0855b8076b03f3539fb970771186e218ddd311526e0bd629a001c80"} Jan 30 06:59:17 crc kubenswrapper[4520]: I0130 06:59:17.981240 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:59:17 crc kubenswrapper[4520]: I0130 06:59:17.990559 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d0bd1d1-935d-458c-9cf8-c11455791a64-etc-swift\") pod \"swift-storage-0\" (UID: \"1d0bd1d1-935d-458c-9cf8-c11455791a64\") " pod="openstack/swift-storage-0" Jan 30 06:59:18 crc kubenswrapper[4520]: I0130 06:59:18.266539 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 06:59:18 crc kubenswrapper[4520]: I0130 06:59:18.667946 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xrbrq" Jan 30 06:59:20 crc kubenswrapper[4520]: I0130 06:59:20.408976 4520 generic.go:334] "Generic (PLEG): container finished" podID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerID="163e771c24eeb7d5133bc8d1013b839f3e5ccdaa9f64759d7a1ab8384a1b0f44" exitCode=0 Jan 30 06:59:20 crc kubenswrapper[4520]: I0130 06:59:20.409052 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184","Type":"ContainerDied","Data":"163e771c24eeb7d5133bc8d1013b839f3e5ccdaa9f64759d7a1ab8384a1b0f44"} Jan 30 06:59:20 crc kubenswrapper[4520]: I0130 06:59:20.411585 4520 generic.go:334] "Generic (PLEG): container finished" podID="fc4abc0f-2827-4636-9942-342593697905" containerID="d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2" exitCode=0 Jan 30 06:59:20 crc kubenswrapper[4520]: I0130 06:59:20.411632 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc4abc0f-2827-4636-9942-342593697905","Type":"ContainerDied","Data":"d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2"} Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.248918 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.376926 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-scripts\") pod \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.377026 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-log-ovn\") pod \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.377083 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run-ovn\") pod \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.377223 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run\") pod \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.377279 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvt99\" (UniqueName: \"kubernetes.io/projected/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-kube-api-access-bvt99\") pod \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.377325 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-additional-scripts\") pod \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\" (UID: \"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30\") " Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.378673 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" (UID: "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.379344 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-scripts" (OuterVolumeSpecName: "scripts") pod "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" (UID: "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.379378 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" (UID: "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.379396 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" (UID: "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.379411 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run" (OuterVolumeSpecName: "var-run") pod "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" (UID: "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.383555 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-kube-api-access-bvt99" (OuterVolumeSpecName: "kube-api-access-bvt99") pod "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" (UID: "520c14a3-4b91-4cfc-b3b3-c72a0b92fb30"). InnerVolumeSpecName "kube-api-access-bvt99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.439036 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrbrq-config-524v5" event={"ID":"520c14a3-4b91-4cfc-b3b3-c72a0b92fb30","Type":"ContainerDied","Data":"8f33d01dd0855b8076b03f3539fb970771186e218ddd311526e0bd629a001c80"} Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.439095 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f33d01dd0855b8076b03f3539fb970771186e218ddd311526e0bd629a001c80" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.439056 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xrbrq-config-524v5" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.445729 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184","Type":"ContainerStarted","Data":"191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c"} Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.445930 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.449380 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc4abc0f-2827-4636-9942-342593697905","Type":"ContainerStarted","Data":"4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98"} Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.449762 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.471826 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.237108103 podStartE2EDuration="1m14.471814999s" podCreationTimestamp="2026-01-30 06:58:08 +0000 UTC" firstStartedPulling="2026-01-30 06:58:10.231660064 +0000 UTC m=+803.860012245" lastFinishedPulling="2026-01-30 06:58:47.46636696 +0000 UTC m=+841.094719141" observedRunningTime="2026-01-30 06:59:22.460469589 +0000 UTC m=+876.088821769" watchObservedRunningTime="2026-01-30 06:59:22.471814999 +0000 UTC m=+876.100167179" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.479915 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvt99\" (UniqueName: \"kubernetes.io/projected/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-kube-api-access-bvt99\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.480227 4520 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.480240 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.480250 4520 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.480260 4520 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.480271 4520 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.489902 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.512004101 podStartE2EDuration="1m14.489892695s" podCreationTimestamp="2026-01-30 06:58:08 +0000 UTC" 
firstStartedPulling="2026-01-30 06:58:10.505206993 +0000 UTC m=+804.133559174" lastFinishedPulling="2026-01-30 06:58:47.483095588 +0000 UTC m=+841.111447768" observedRunningTime="2026-01-30 06:59:22.486213559 +0000 UTC m=+876.114565740" watchObservedRunningTime="2026-01-30 06:59:22.489892695 +0000 UTC m=+876.118244867" Jan 30 06:59:22 crc kubenswrapper[4520]: I0130 06:59:22.634744 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 06:59:22 crc kubenswrapper[4520]: W0130 06:59:22.641635 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d0bd1d1_935d_458c_9cf8_c11455791a64.slice/crio-6e9b8eec248372d0cd0863e4070772ed70bbaff0a864cfe1c45b9ef87160ba30 WatchSource:0}: Error finding container 6e9b8eec248372d0cd0863e4070772ed70bbaff0a864cfe1c45b9ef87160ba30: Status 404 returned error can't find the container with id 6e9b8eec248372d0cd0863e4070772ed70bbaff0a864cfe1c45b9ef87160ba30 Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.347549 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xrbrq-config-524v5"] Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.355712 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xrbrq-config-524v5"] Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.460152 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fglmz" event={"ID":"ddd50154-e55a-4dae-ac2d-3528b94ff9f6","Type":"ContainerStarted","Data":"758237f69ab238cc777b2a0d458c5a1e73ee0b2b2600269700316f61c3cd66b4"} Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.462682 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"6e9b8eec248372d0cd0863e4070772ed70bbaff0a864cfe1c45b9ef87160ba30"} Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.463694 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xrbrq-config-l8jcf"] Jan 30 06:59:23 crc kubenswrapper[4520]: E0130 06:59:23.464149 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" containerName="ovn-config" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.464172 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" containerName="ovn-config" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.464375 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" containerName="ovn-config" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.467262 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.469170 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.487934 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrbrq-config-l8jcf"] Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.501103 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-fglmz" podStartSLOduration=2.266661584 podStartE2EDuration="20.501076049s" podCreationTimestamp="2026-01-30 06:59:03 +0000 UTC" firstStartedPulling="2026-01-30 06:59:03.941623343 +0000 UTC m=+857.569975524" lastFinishedPulling="2026-01-30 06:59:22.176037808 +0000 UTC m=+875.804389989" observedRunningTime="2026-01-30 06:59:23.490867225 +0000 UTC m=+877.119219407" watchObservedRunningTime="2026-01-30 06:59:23.501076049 +0000 UTC m=+877.129428230" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.611707 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run-ovn\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.611806 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fd8p\" (UniqueName: \"kubernetes.io/projected/9e7b0ad6-c136-465e-8352-7896eb0a292a-kube-api-access-6fd8p\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.611877 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.611950 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-log-ovn\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.613615 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-scripts\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.614156 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-additional-scripts\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc 
kubenswrapper[4520]: I0130 06:59:23.717096 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run-ovn\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.717222 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fd8p\" (UniqueName: \"kubernetes.io/projected/9e7b0ad6-c136-465e-8352-7896eb0a292a-kube-api-access-6fd8p\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.717268 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.717404 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run-ovn\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.717453 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.717346 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-log-ovn\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.717575 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-log-ovn\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.717696 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-scripts\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.717794 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-additional-scripts\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf" Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 
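(Illustrative aside, not part of the journal: note that the klog timestamps in the mount burst above are not monotonic in print order — the 06:59:23.717346 "MountVolume started" line for var-log-ovn is printed after the 06:59:23.717453 line — presumably because concurrent operation-executor goroutines race to the logger. A rough Go sketch that re-sorts journal lines by the embedded klog header, assuming the journal text arrives on stdin:)

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// klog header looks like "I0130 06:59:23.717346": severity letter,
// month+day, then a fixed-width wall-clock time with microseconds.
var stamp = regexp.MustCompile(`[IWEF](\d{4} \d{2}:\d{2}:\d{2}\.\d{6})`)

func main() {
	var lines []string
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // kubelet lines can be very long
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	sort.SliceStable(lines, func(i, j int) bool {
		mi, mj := stamp.FindStringSubmatch(lines[i]), stamp.FindStringSubmatch(lines[j])
		if mi == nil || mj == nil {
			return false // rough sketch: leave lines without a klog header in place
		}
		// Lexicographic compare works because the fields are zero-padded.
		return mi[1] < mj[1]
	})
	for _, l := range lines {
		fmt.Println(l)
	}
}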
Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.718551 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-additional-scripts\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf"
Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.719659 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-scripts\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf"
Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.747775 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fd8p\" (UniqueName: \"kubernetes.io/projected/9e7b0ad6-c136-465e-8352-7896eb0a292a-kube-api-access-6fd8p\") pod \"ovn-controller-xrbrq-config-l8jcf\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") " pod="openstack/ovn-controller-xrbrq-config-l8jcf"
Jan 30 06:59:23 crc kubenswrapper[4520]: I0130 06:59:23.787862 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrbrq-config-l8jcf"
Jan 30 06:59:24 crc kubenswrapper[4520]: I0130 06:59:24.219132 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrbrq-config-l8jcf"]
Jan 30 06:59:24 crc kubenswrapper[4520]: I0130 06:59:24.696683 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="520c14a3-4b91-4cfc-b3b3-c72a0b92fb30" path="/var/lib/kubelet/pods/520c14a3-4b91-4cfc-b3b3-c72a0b92fb30/volumes"
Jan 30 06:59:25 crc kubenswrapper[4520]: I0130 06:59:25.478008 4520 generic.go:334] "Generic (PLEG): container finished" podID="9e7b0ad6-c136-465e-8352-7896eb0a292a" containerID="9d869e019e8f61d5527214121ef9e7ee2f1b3c059151bd1981be27b48d7dd44f" exitCode=0
Jan 30 06:59:25 crc kubenswrapper[4520]: I0130 06:59:25.478163 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrbrq-config-l8jcf" event={"ID":"9e7b0ad6-c136-465e-8352-7896eb0a292a","Type":"ContainerDied","Data":"9d869e019e8f61d5527214121ef9e7ee2f1b3c059151bd1981be27b48d7dd44f"}
Jan 30 06:59:25 crc kubenswrapper[4520]: I0130 06:59:25.478283 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrbrq-config-l8jcf" event={"ID":"9e7b0ad6-c136-465e-8352-7896eb0a292a","Type":"ContainerStarted","Data":"5721d92091ade446959ea908c5aad564c1deeba5a9e1507a8bcc5cd0e3ebb88a"}
Jan 30 06:59:25 crc kubenswrapper[4520]: I0130 06:59:25.480739 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"c21a3e52be90f87c81bf4a82bcbf06778b8d58037f2c3982762a8c31ec70e3f7"}
Jan 30 06:59:25 crc kubenswrapper[4520]: I0130 06:59:25.480770 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"7e49bd5f10413daaef4094500c119df1b94cb6a2337306739f4734f0791beda6"}
Jan 30 06:59:25 crc kubenswrapper[4520]: I0130 06:59:25.480784 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"871f3c5caafaa0ff5c8f06afbbb5ab2821aa4191945e40464ca046b147d87778"}
Jan 30 06:59:25 crc kubenswrapper[4520]: I0130 06:59:25.480794 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"ede9061ee76e6c04dd2c6ee7a8606f2b322749943a8407164be013a9750ff18e"}
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.795002 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrbrq-config-l8jcf"
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.987993 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fd8p\" (UniqueName: \"kubernetes.io/projected/9e7b0ad6-c136-465e-8352-7896eb0a292a-kube-api-access-6fd8p\") pod \"9e7b0ad6-c136-465e-8352-7896eb0a292a\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") "
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.988177 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-log-ovn\") pod \"9e7b0ad6-c136-465e-8352-7896eb0a292a\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") "
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.988255 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run\") pod \"9e7b0ad6-c136-465e-8352-7896eb0a292a\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") "
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.988297 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-additional-scripts\") pod \"9e7b0ad6-c136-465e-8352-7896eb0a292a\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") "
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.988413 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-scripts\") pod \"9e7b0ad6-c136-465e-8352-7896eb0a292a\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") "
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.988535 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run-ovn\") pod \"9e7b0ad6-c136-465e-8352-7896eb0a292a\" (UID: \"9e7b0ad6-c136-465e-8352-7896eb0a292a\") "
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.988649 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9e7b0ad6-c136-465e-8352-7896eb0a292a" (UID: "9e7b0ad6-c136-465e-8352-7896eb0a292a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.989161 4520 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.989211 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9e7b0ad6-c136-465e-8352-7896eb0a292a" (UID: "9e7b0ad6-c136-465e-8352-7896eb0a292a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.989241 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run" (OuterVolumeSpecName: "var-run") pod "9e7b0ad6-c136-465e-8352-7896eb0a292a" (UID: "9e7b0ad6-c136-465e-8352-7896eb0a292a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.989822 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9e7b0ad6-c136-465e-8352-7896eb0a292a" (UID: "9e7b0ad6-c136-465e-8352-7896eb0a292a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:26 crc kubenswrapper[4520]: I0130 06:59:26.990717 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-scripts" (OuterVolumeSpecName: "scripts") pod "9e7b0ad6-c136-465e-8352-7896eb0a292a" (UID: "9e7b0ad6-c136-465e-8352-7896eb0a292a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.017888 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7b0ad6-c136-465e-8352-7896eb0a292a-kube-api-access-6fd8p" (OuterVolumeSpecName: "kube-api-access-6fd8p") pod "9e7b0ad6-c136-465e-8352-7896eb0a292a" (UID: "9e7b0ad6-c136-465e-8352-7896eb0a292a"). InnerVolumeSpecName "kube-api-access-6fd8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.090413 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.090442 4520 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.090452 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fd8p\" (UniqueName: \"kubernetes.io/projected/9e7b0ad6-c136-465e-8352-7896eb0a292a-kube-api-access-6fd8p\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.090462 4520 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e7b0ad6-c136-465e-8352-7896eb0a292a-var-run\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.090469 4520 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e7b0ad6-c136-465e-8352-7896eb0a292a-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.496286 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrbrq-config-l8jcf" event={"ID":"9e7b0ad6-c136-465e-8352-7896eb0a292a","Type":"ContainerDied","Data":"5721d92091ade446959ea908c5aad564c1deeba5a9e1507a8bcc5cd0e3ebb88a"}
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.496603 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5721d92091ade446959ea908c5aad564c1deeba5a9e1507a8bcc5cd0e3ebb88a"
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.496316 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrbrq-config-l8jcf"
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.499688 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"1e1602e72370be680ee2983a9dc7b6f383f8c9e1f024b2ecab891c89439d02a0"}
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.499812 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"eb0f268414141a817122b25fdb9084d1745f57711c352c97ff1a2c381f231496"}
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.874994 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xrbrq-config-l8jcf"]
Jan 30 06:59:27 crc kubenswrapper[4520]: I0130 06:59:27.880301 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xrbrq-config-l8jcf"]
Jan 30 06:59:28 crc kubenswrapper[4520]: I0130 06:59:28.510206 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"19b29ab96d50269bd5d425a22592465a0ced95ba73dc8d3f4a5d2789df46d478"}
Jan 30 06:59:28 crc kubenswrapper[4520]: I0130 06:59:28.510351 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"76c0e25ef3faf0d38ef52cd9d6122307bc52b142adc2573ddb745d7e25711fa8"}
Jan 30 06:59:28 crc kubenswrapper[4520]: I0130 06:59:28.511794 4520 generic.go:334] "Generic (PLEG): container finished" podID="ddd50154-e55a-4dae-ac2d-3528b94ff9f6" containerID="758237f69ab238cc777b2a0d458c5a1e73ee0b2b2600269700316f61c3cd66b4" exitCode=0
Jan 30 06:59:28 crc kubenswrapper[4520]: I0130 06:59:28.512869 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fglmz" event={"ID":"ddd50154-e55a-4dae-ac2d-3528b94ff9f6","Type":"ContainerDied","Data":"758237f69ab238cc777b2a0d458c5a1e73ee0b2b2600269700316f61c3cd66b4"}
Jan 30 06:59:28 crc kubenswrapper[4520]: I0130 06:59:28.693814 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7b0ad6-c136-465e-8352-7896eb0a292a" path="/var/lib/kubelet/pods/9e7b0ad6-c136-465e-8352-7896eb0a292a/volumes"
Need to start a new one" pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.038202 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-combined-ca-bundle\") pod \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.038365 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-db-sync-config-data\") pod \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.038602 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbszc\" (UniqueName: \"kubernetes.io/projected/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-kube-api-access-gbszc\") pod \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.038713 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-config-data\") pod \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\" (UID: \"ddd50154-e55a-4dae-ac2d-3528b94ff9f6\") " Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.045884 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ddd50154-e55a-4dae-ac2d-3528b94ff9f6" (UID: "ddd50154-e55a-4dae-ac2d-3528b94ff9f6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.050858 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-kube-api-access-gbszc" (OuterVolumeSpecName: "kube-api-access-gbszc") pod "ddd50154-e55a-4dae-ac2d-3528b94ff9f6" (UID: "ddd50154-e55a-4dae-ac2d-3528b94ff9f6"). InnerVolumeSpecName "kube-api-access-gbszc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.064659 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddd50154-e55a-4dae-ac2d-3528b94ff9f6" (UID: "ddd50154-e55a-4dae-ac2d-3528b94ff9f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.090021 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-config-data" (OuterVolumeSpecName: "config-data") pod "ddd50154-e55a-4dae-ac2d-3528b94ff9f6" (UID: "ddd50154-e55a-4dae-ac2d-3528b94ff9f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.141574 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbszc\" (UniqueName: \"kubernetes.io/projected/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-kube-api-access-gbszc\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.141629 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.141667 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.141678 4520 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddd50154-e55a-4dae-ac2d-3528b94ff9f6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.527355 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fglmz" event={"ID":"ddd50154-e55a-4dae-ac2d-3528b94ff9f6","Type":"ContainerDied","Data":"dfbfe19373dfecb2f9195f81a13443981243ad194b4682e3f57304548f3e2bd5"} Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.527390 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fglmz" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.527406 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfbfe19373dfecb2f9195f81a13443981243ad194b4682e3f57304548f3e2bd5" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.905922 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-667d99665-jh7qq"] Jan 30 06:59:30 crc kubenswrapper[4520]: E0130 06:59:30.909886 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd50154-e55a-4dae-ac2d-3528b94ff9f6" containerName="glance-db-sync" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.909967 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd50154-e55a-4dae-ac2d-3528b94ff9f6" containerName="glance-db-sync" Jan 30 06:59:30 crc kubenswrapper[4520]: E0130 06:59:30.910053 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7b0ad6-c136-465e-8352-7896eb0a292a" containerName="ovn-config" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.910101 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7b0ad6-c136-465e-8352-7896eb0a292a" containerName="ovn-config" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.910452 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd50154-e55a-4dae-ac2d-3528b94ff9f6" containerName="glance-db-sync" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.913015 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7b0ad6-c136-465e-8352-7896eb0a292a" containerName="ovn-config" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.915438 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:30 crc kubenswrapper[4520]: I0130 06:59:30.919220 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667d99665-jh7qq"] Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.062111 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-sb\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.062691 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-dns-svc\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.062909 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-nb\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.063024 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-config\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.063153 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qzp\" (UniqueName: \"kubernetes.io/projected/25580660-5d8e-46d0-938b-c9f3aba9b8d7-kube-api-access-w6qzp\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.166079 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-dns-svc\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.166202 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-nb\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.166266 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-config\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.166318 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6qzp\" 
(UniqueName: \"kubernetes.io/projected/25580660-5d8e-46d0-938b-c9f3aba9b8d7-kube-api-access-w6qzp\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.166360 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-sb\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.167337 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-dns-svc\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.167364 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-nb\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.167406 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-sb\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.167793 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-config\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.184300 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6qzp\" (UniqueName: \"kubernetes.io/projected/25580660-5d8e-46d0-938b-c9f3aba9b8d7-kube-api-access-w6qzp\") pod \"dnsmasq-dns-667d99665-jh7qq\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.246393 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:31 crc kubenswrapper[4520]: I0130 06:59:31.678634 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667d99665-jh7qq"] Jan 30 06:59:32 crc kubenswrapper[4520]: I0130 06:59:32.563632 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"db208b5b81d604c00686f2738700f6dcd2ca64726c05264580e3ef1405cfd404"} Jan 30 06:59:32 crc kubenswrapper[4520]: I0130 06:59:32.564015 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"cbc0044fa9c18ed830329764361f7347c8926dfb31ee7bfc565fe570c5550344"} Jan 30 06:59:32 crc kubenswrapper[4520]: I0130 06:59:32.564033 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"58b3b3eacb1dfa58678cb75d012f8e8e4be7c387c0dd14fe204a0949ab65bee8"} Jan 30 06:59:32 crc kubenswrapper[4520]: I0130 06:59:32.566082 4520 generic.go:334] "Generic (PLEG): container finished" podID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" containerID="fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6" exitCode=0 Jan 30 06:59:32 crc kubenswrapper[4520]: I0130 06:59:32.566110 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d99665-jh7qq" event={"ID":"25580660-5d8e-46d0-938b-c9f3aba9b8d7","Type":"ContainerDied","Data":"fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6"} Jan 30 06:59:32 crc kubenswrapper[4520]: I0130 06:59:32.566130 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d99665-jh7qq" event={"ID":"25580660-5d8e-46d0-938b-c9f3aba9b8d7","Type":"ContainerStarted","Data":"7b2ef2d87c42e2044385a1e6db013ba33798c62804d322a17d99024b533b6988"} Jan 30 06:59:33 crc kubenswrapper[4520]: I0130 06:59:33.575379 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d99665-jh7qq" event={"ID":"25580660-5d8e-46d0-938b-c9f3aba9b8d7","Type":"ContainerStarted","Data":"93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778"} Jan 30 06:59:33 crc kubenswrapper[4520]: I0130 06:59:33.575785 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:33 crc kubenswrapper[4520]: I0130 06:59:33.586208 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"404bdaac17352f60fe4faf147b964c6679514a0cc1e1ded7fde32e75b923dc9e"} Jan 30 06:59:33 crc kubenswrapper[4520]: I0130 06:59:33.586264 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"a96e415e77d4c81fa505bd3b08344f7cef5bc939c2e89b50023676cc7c1e8aef"} Jan 30 06:59:33 crc kubenswrapper[4520]: I0130 06:59:33.586279 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"b3d42c98f02a1bcaab165e60a2f4e37a623ddd5918c88bbe60c04e344a3285f5"} Jan 30 06:59:33 crc kubenswrapper[4520]: I0130 06:59:33.586293 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"1d0bd1d1-935d-458c-9cf8-c11455791a64","Type":"ContainerStarted","Data":"62c76500ced4fb31a184015102febf2b6812eb09eaabc619ba56423cb60e0c70"} Jan 30 06:59:33 crc kubenswrapper[4520]: I0130 06:59:33.611435 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-667d99665-jh7qq" podStartSLOduration=3.61141468 podStartE2EDuration="3.61141468s" podCreationTimestamp="2026-01-30 06:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:59:33.607419449 +0000 UTC m=+887.235771629" watchObservedRunningTime="2026-01-30 06:59:33.61141468 +0000 UTC m=+887.239766861" Jan 30 06:59:33 crc kubenswrapper[4520]: I0130 06:59:33.662900 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.285858382 podStartE2EDuration="49.662877022s" podCreationTimestamp="2026-01-30 06:58:44 +0000 UTC" firstStartedPulling="2026-01-30 06:59:22.643940675 +0000 UTC m=+876.272292857" lastFinishedPulling="2026-01-30 06:59:32.020959326 +0000 UTC m=+885.649311497" observedRunningTime="2026-01-30 06:59:33.648026753 +0000 UTC m=+887.276378934" watchObservedRunningTime="2026-01-30 06:59:33.662877022 +0000 UTC m=+887.291229204" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.016489 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667d99665-jh7qq"] Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.051654 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bcc75fb87-pcx4j"] Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.053001 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.057828 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.070865 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bcc75fb87-pcx4j"] Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.128247 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-swift-storage-0\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.128558 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-config\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.128689 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcbj4\" (UniqueName: \"kubernetes.io/projected/20e16608-f957-4e8c-b9d2-63718bd0342e-kube-api-access-wcbj4\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.128795 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-nb\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.128880 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-sb\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.128972 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-svc\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.230914 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-config\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.230991 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcbj4\" (UniqueName: \"kubernetes.io/projected/20e16608-f957-4e8c-b9d2-63718bd0342e-kube-api-access-wcbj4\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") 
" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.231050 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-nb\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.231079 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-sb\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.231103 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-svc\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.231193 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-swift-storage-0\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.231976 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-config\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.232074 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-swift-storage-0\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.232452 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-sb\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.232577 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-svc\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.233384 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-nb\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.247839 4520 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcbj4\" (UniqueName: \"kubernetes.io/projected/20e16608-f957-4e8c-b9d2-63718bd0342e-kube-api-access-wcbj4\") pod \"dnsmasq-dns-bcc75fb87-pcx4j\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.368334 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:34 crc kubenswrapper[4520]: I0130 06:59:34.634304 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bcc75fb87-pcx4j"] Jan 30 06:59:35 crc kubenswrapper[4520]: I0130 06:59:35.608613 4520 generic.go:334] "Generic (PLEG): container finished" podID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerID="e88ba02bcc74ae1575e631bf9974f9e087e46d874e90376a98d1259f1ac2672d" exitCode=0 Jan 30 06:59:35 crc kubenswrapper[4520]: I0130 06:59:35.608726 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" event={"ID":"20e16608-f957-4e8c-b9d2-63718bd0342e","Type":"ContainerDied","Data":"e88ba02bcc74ae1575e631bf9974f9e087e46d874e90376a98d1259f1ac2672d"} Jan 30 06:59:35 crc kubenswrapper[4520]: I0130 06:59:35.609219 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" event={"ID":"20e16608-f957-4e8c-b9d2-63718bd0342e","Type":"ContainerStarted","Data":"4aeb6a3ae6877dba6f66cd77b55982510470463686dc355b36c6361e74e63019"} Jan 30 06:59:35 crc kubenswrapper[4520]: I0130 06:59:35.609319 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-667d99665-jh7qq" podUID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" containerName="dnsmasq-dns" containerID="cri-o://93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778" gracePeriod=10 Jan 30 06:59:35 crc kubenswrapper[4520]: I0130 06:59:35.911302 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.066562 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-dns-svc\") pod \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.066670 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-config\") pod \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.066737 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-sb\") pod \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.066804 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6qzp\" (UniqueName: \"kubernetes.io/projected/25580660-5d8e-46d0-938b-c9f3aba9b8d7-kube-api-access-w6qzp\") pod \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.066911 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-nb\") pod \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\" (UID: \"25580660-5d8e-46d0-938b-c9f3aba9b8d7\") " Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.072314 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25580660-5d8e-46d0-938b-c9f3aba9b8d7-kube-api-access-w6qzp" (OuterVolumeSpecName: "kube-api-access-w6qzp") pod "25580660-5d8e-46d0-938b-c9f3aba9b8d7" (UID: "25580660-5d8e-46d0-938b-c9f3aba9b8d7"). InnerVolumeSpecName "kube-api-access-w6qzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.102096 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25580660-5d8e-46d0-938b-c9f3aba9b8d7" (UID: "25580660-5d8e-46d0-938b-c9f3aba9b8d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.103057 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-config" (OuterVolumeSpecName: "config") pod "25580660-5d8e-46d0-938b-c9f3aba9b8d7" (UID: "25580660-5d8e-46d0-938b-c9f3aba9b8d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.104766 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25580660-5d8e-46d0-938b-c9f3aba9b8d7" (UID: "25580660-5d8e-46d0-938b-c9f3aba9b8d7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.105102 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25580660-5d8e-46d0-938b-c9f3aba9b8d7" (UID: "25580660-5d8e-46d0-938b-c9f3aba9b8d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.169847 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.169886 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.169899 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6qzp\" (UniqueName: \"kubernetes.io/projected/25580660-5d8e-46d0-938b-c9f3aba9b8d7-kube-api-access-w6qzp\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.169910 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.169921 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25580660-5d8e-46d0-938b-c9f3aba9b8d7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.619459 4520 generic.go:334] "Generic (PLEG): container finished" podID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" containerID="93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778" exitCode=0 Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.619688 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d99665-jh7qq" event={"ID":"25580660-5d8e-46d0-938b-c9f3aba9b8d7","Type":"ContainerDied","Data":"93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778"} Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.619891 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667d99665-jh7qq" event={"ID":"25580660-5d8e-46d0-938b-c9f3aba9b8d7","Type":"ContainerDied","Data":"7b2ef2d87c42e2044385a1e6db013ba33798c62804d322a17d99024b533b6988"} Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.619768 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667d99665-jh7qq" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.619913 4520 scope.go:117] "RemoveContainer" containerID="93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.622357 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" event={"ID":"20e16608-f957-4e8c-b9d2-63718bd0342e","Type":"ContainerStarted","Data":"ce05a23d78b97a2b22eb56697a31bdacb3a51060391208afe0914e8aec8db6f5"} Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.622896 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.648758 4520 scope.go:117] "RemoveContainer" containerID="fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.652361 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" podStartSLOduration=2.6523412779999997 podStartE2EDuration="2.652341278s" podCreationTimestamp="2026-01-30 06:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:59:36.64909224 +0000 UTC m=+890.277444420" watchObservedRunningTime="2026-01-30 06:59:36.652341278 +0000 UTC m=+890.280693459" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.674739 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667d99665-jh7qq"] Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.681884 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-667d99665-jh7qq"] Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.682829 4520 scope.go:117] "RemoveContainer" containerID="93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778" Jan 30 06:59:36 crc kubenswrapper[4520]: E0130 06:59:36.685209 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778\": container with ID starting with 93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778 not found: ID does not exist" containerID="93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.685244 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778"} err="failed to get container status \"93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778\": rpc error: code = NotFound desc = could not find container \"93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778\": container with ID starting with 93b15bb785b33a2c82abd7f3b969bf00a1a47cfc7dc8959ddaad344de3ccc778 not found: ID does not exist" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.685283 4520 scope.go:117] "RemoveContainer" containerID="fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6" Jan 30 06:59:36 crc kubenswrapper[4520]: E0130 06:59:36.685603 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6\": container with ID starting with 
fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6 not found: ID does not exist" containerID="fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.685670 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6"} err="failed to get container status \"fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6\": rpc error: code = NotFound desc = could not find container \"fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6\": container with ID starting with fe3f70adc3fb05bff4a043f9c7839541ef2497e57d317aa3c67bfaefd01948c6 not found: ID does not exist" Jan 30 06:59:36 crc kubenswrapper[4520]: I0130 06:59:36.692681 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" path="/var/lib/kubelet/pods/25580660-5d8e-46d0-938b-c9f3aba9b8d7/volumes" Jan 30 06:59:39 crc kubenswrapper[4520]: I0130 06:59:39.696756 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.030915 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.144172 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-kljc9"] Jan 30 06:59:40 crc kubenswrapper[4520]: E0130 06:59:40.144692 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" containerName="init" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.144767 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" containerName="init" Jan 30 06:59:40 crc kubenswrapper[4520]: E0130 06:59:40.144846 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" containerName="dnsmasq-dns" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.144899 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" containerName="dnsmasq-dns" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.145082 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="25580660-5d8e-46d0-938b-c9f3aba9b8d7" containerName="dnsmasq-dns" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.145632 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-kljc9" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.182476 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-kljc9"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.246407 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4d1798-73e8-4315-87d5-e638d87abfd5-operator-scripts\") pod \"heat-db-create-kljc9\" (UID: \"db4d1798-73e8-4315-87d5-e638d87abfd5\") " pod="openstack/heat-db-create-kljc9" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.246566 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s455s\" (UniqueName: \"kubernetes.io/projected/db4d1798-73e8-4315-87d5-e638d87abfd5-kube-api-access-s455s\") pod \"heat-db-create-kljc9\" (UID: \"db4d1798-73e8-4315-87d5-e638d87abfd5\") " pod="openstack/heat-db-create-kljc9" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.290347 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qccxn"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.291584 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qccxn" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.313901 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qccxn"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.349225 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s455s\" (UniqueName: \"kubernetes.io/projected/db4d1798-73e8-4315-87d5-e638d87abfd5-kube-api-access-s455s\") pod \"heat-db-create-kljc9\" (UID: \"db4d1798-73e8-4315-87d5-e638d87abfd5\") " pod="openstack/heat-db-create-kljc9" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.349350 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4d1798-73e8-4315-87d5-e638d87abfd5-operator-scripts\") pod \"heat-db-create-kljc9\" (UID: \"db4d1798-73e8-4315-87d5-e638d87abfd5\") " pod="openstack/heat-db-create-kljc9" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.349940 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4d1798-73e8-4315-87d5-e638d87abfd5-operator-scripts\") pod \"heat-db-create-kljc9\" (UID: \"db4d1798-73e8-4315-87d5-e638d87abfd5\") " pod="openstack/heat-db-create-kljc9" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.378254 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s455s\" (UniqueName: \"kubernetes.io/projected/db4d1798-73e8-4315-87d5-e638d87abfd5-kube-api-access-s455s\") pod \"heat-db-create-kljc9\" (UID: \"db4d1798-73e8-4315-87d5-e638d87abfd5\") " pod="openstack/heat-db-create-kljc9" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.378633 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ac5a-account-create-update-xh94n"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.379480 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ac5a-account-create-update-xh94n" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.417250 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.452577 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ac5a-account-create-update-xh94n"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.453029 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwfsd\" (UniqueName: \"kubernetes.io/projected/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-kube-api-access-dwfsd\") pod \"barbican-ac5a-account-create-update-xh94n\" (UID: \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\") " pod="openstack/barbican-ac5a-account-create-update-xh94n" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.453099 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-operator-scripts\") pod \"barbican-ac5a-account-create-update-xh94n\" (UID: \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\") " pod="openstack/barbican-ac5a-account-create-update-xh94n" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.453179 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4564e1a-9135-4edd-842b-e4954834ae5d-operator-scripts\") pod \"cinder-db-create-qccxn\" (UID: \"d4564e1a-9135-4edd-842b-e4954834ae5d\") " pod="openstack/cinder-db-create-qccxn" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.453244 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfjg\" (UniqueName: \"kubernetes.io/projected/d4564e1a-9135-4edd-842b-e4954834ae5d-kube-api-access-hqfjg\") pod \"cinder-db-create-qccxn\" (UID: \"d4564e1a-9135-4edd-842b-e4954834ae5d\") " pod="openstack/cinder-db-create-qccxn" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.464074 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kljc9" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.523407 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-1396-account-create-update-qtf79"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.526550 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-1396-account-create-update-qtf79" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.528757 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.557029 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwfsd\" (UniqueName: \"kubernetes.io/projected/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-kube-api-access-dwfsd\") pod \"barbican-ac5a-account-create-update-xh94n\" (UID: \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\") " pod="openstack/barbican-ac5a-account-create-update-xh94n" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.557068 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-operator-scripts\") pod \"barbican-ac5a-account-create-update-xh94n\" (UID: \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\") " pod="openstack/barbican-ac5a-account-create-update-xh94n" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.557119 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4564e1a-9135-4edd-842b-e4954834ae5d-operator-scripts\") pod \"cinder-db-create-qccxn\" (UID: \"d4564e1a-9135-4edd-842b-e4954834ae5d\") " pod="openstack/cinder-db-create-qccxn" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.557151 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfjg\" (UniqueName: \"kubernetes.io/projected/d4564e1a-9135-4edd-842b-e4954834ae5d-kube-api-access-hqfjg\") pod \"cinder-db-create-qccxn\" (UID: \"d4564e1a-9135-4edd-842b-e4954834ae5d\") " pod="openstack/cinder-db-create-qccxn" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.558116 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4564e1a-9135-4edd-842b-e4954834ae5d-operator-scripts\") pod \"cinder-db-create-qccxn\" (UID: \"d4564e1a-9135-4edd-842b-e4954834ae5d\") " pod="openstack/cinder-db-create-qccxn" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.558102 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-operator-scripts\") pod \"barbican-ac5a-account-create-update-xh94n\" (UID: \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\") " pod="openstack/barbican-ac5a-account-create-update-xh94n" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.576295 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1396-account-create-update-qtf79"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.620885 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qhxqf"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.621918 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: W0130 06:59:40.657130 4520 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-jddpd": failed to list *v1.Secret: secrets "keystone-keystone-dockercfg-jddpd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 06:59:40 crc kubenswrapper[4520]: E0130 06:59:40.657380 4520 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-jddpd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-keystone-dockercfg-jddpd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 06:59:40 crc kubenswrapper[4520]: W0130 06:59:40.657442 4520 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list *v1.Secret: secrets "keystone-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 06:59:40 crc kubenswrapper[4520]: E0130 06:59:40.657457 4520 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 06:59:40 crc kubenswrapper[4520]: W0130 06:59:40.657490 4520 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: secrets "keystone-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 06:59:40 crc kubenswrapper[4520]: E0130 06:59:40.657529 4520 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 06:59:40 crc kubenswrapper[4520]: W0130 06:59:40.657615 4520 reflector.go:561] object-"openstack"/"keystone": failed to list *v1.Secret: secrets "keystone" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 06:59:40 crc kubenswrapper[4520]: E0130 06:59:40.657635 4520 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.658922 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzfkt\" (UniqueName: 
\"kubernetes.io/projected/f732baff-71b8-4edc-8ec9-ebf30a096f74-kube-api-access-wzfkt\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.659014 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-combined-ca-bundle\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.659054 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.659075 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88a57446-d8a7-45ce-ac2a-1704429731a7-operator-scripts\") pod \"heat-1396-account-create-update-qtf79\" (UID: \"88a57446-d8a7-45ce-ac2a-1704429731a7\") " pod="openstack/heat-1396-account-create-update-qtf79" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.659145 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs8gl\" (UniqueName: \"kubernetes.io/projected/88a57446-d8a7-45ce-ac2a-1704429731a7-kube-api-access-vs8gl\") pod \"heat-1396-account-create-update-qtf79\" (UID: \"88a57446-d8a7-45ce-ac2a-1704429731a7\") " pod="openstack/heat-1396-account-create-update-qtf79" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.665175 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-k9p6j"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.673318 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-k9p6j" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.674884 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwfsd\" (UniqueName: \"kubernetes.io/projected/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-kube-api-access-dwfsd\") pod \"barbican-ac5a-account-create-update-xh94n\" (UID: \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\") " pod="openstack/barbican-ac5a-account-create-update-xh94n" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.722224 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfjg\" (UniqueName: \"kubernetes.io/projected/d4564e1a-9135-4edd-842b-e4954834ae5d-kube-api-access-hqfjg\") pod \"cinder-db-create-qccxn\" (UID: \"d4564e1a-9135-4edd-842b-e4954834ae5d\") " pod="openstack/cinder-db-create-qccxn" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.761174 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzfkt\" (UniqueName: \"kubernetes.io/projected/f732baff-71b8-4edc-8ec9-ebf30a096f74-kube-api-access-wzfkt\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.761345 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-combined-ca-bundle\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.761384 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de5d64c-937a-41c9-b68c-8832b18aabf1-operator-scripts\") pod \"barbican-db-create-k9p6j\" (UID: \"1de5d64c-937a-41c9-b68c-8832b18aabf1\") " pod="openstack/barbican-db-create-k9p6j" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.761418 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nc5\" (UniqueName: \"kubernetes.io/projected/1de5d64c-937a-41c9-b68c-8832b18aabf1-kube-api-access-h5nc5\") pod \"barbican-db-create-k9p6j\" (UID: \"1de5d64c-937a-41c9-b68c-8832b18aabf1\") " pod="openstack/barbican-db-create-k9p6j" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.761446 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.761468 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88a57446-d8a7-45ce-ac2a-1704429731a7-operator-scripts\") pod \"heat-1396-account-create-update-qtf79\" (UID: \"88a57446-d8a7-45ce-ac2a-1704429731a7\") " pod="openstack/heat-1396-account-create-update-qtf79" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.762873 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs8gl\" (UniqueName: \"kubernetes.io/projected/88a57446-d8a7-45ce-ac2a-1704429731a7-kube-api-access-vs8gl\") pod 
\"heat-1396-account-create-update-qtf79\" (UID: \"88a57446-d8a7-45ce-ac2a-1704429731a7\") " pod="openstack/heat-1396-account-create-update-qtf79" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.764786 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qhxqf"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.765279 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88a57446-d8a7-45ce-ac2a-1704429731a7-operator-scripts\") pod \"heat-1396-account-create-update-qtf79\" (UID: \"88a57446-d8a7-45ce-ac2a-1704429731a7\") " pod="openstack/heat-1396-account-create-update-qtf79" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.766659 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ac5a-account-create-update-xh94n" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.785476 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k9p6j"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.792181 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-combined-ca-bundle\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.819202 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzfkt\" (UniqueName: \"kubernetes.io/projected/f732baff-71b8-4edc-8ec9-ebf30a096f74-kube-api-access-wzfkt\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.819258 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6cc0-account-create-update-bj5pr"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.820185 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs8gl\" (UniqueName: \"kubernetes.io/projected/88a57446-d8a7-45ce-ac2a-1704429731a7-kube-api-access-vs8gl\") pod \"heat-1396-account-create-update-qtf79\" (UID: \"88a57446-d8a7-45ce-ac2a-1704429731a7\") " pod="openstack/heat-1396-account-create-update-qtf79" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.820377 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6cc0-account-create-update-bj5pr" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.822713 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.822924 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6cc0-account-create-update-bj5pr"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.865691 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79d2z\" (UniqueName: \"kubernetes.io/projected/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-kube-api-access-79d2z\") pod \"cinder-6cc0-account-create-update-bj5pr\" (UID: \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\") " pod="openstack/cinder-6cc0-account-create-update-bj5pr" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.865799 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de5d64c-937a-41c9-b68c-8832b18aabf1-operator-scripts\") pod \"barbican-db-create-k9p6j\" (UID: \"1de5d64c-937a-41c9-b68c-8832b18aabf1\") " pod="openstack/barbican-db-create-k9p6j" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.865834 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-operator-scripts\") pod \"cinder-6cc0-account-create-update-bj5pr\" (UID: \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\") " pod="openstack/cinder-6cc0-account-create-update-bj5pr" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.865858 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5nc5\" (UniqueName: \"kubernetes.io/projected/1de5d64c-937a-41c9-b68c-8832b18aabf1-kube-api-access-h5nc5\") pod \"barbican-db-create-k9p6j\" (UID: \"1de5d64c-937a-41c9-b68c-8832b18aabf1\") " pod="openstack/barbican-db-create-k9p6j" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.866579 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de5d64c-937a-41c9-b68c-8832b18aabf1-operator-scripts\") pod \"barbican-db-create-k9p6j\" (UID: \"1de5d64c-937a-41c9-b68c-8832b18aabf1\") " pod="openstack/barbican-db-create-k9p6j" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.883023 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1396-account-create-update-qtf79" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.886389 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-p84rh"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.887414 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p84rh" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.912240 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p84rh"] Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.916980 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qccxn" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.920701 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5nc5\" (UniqueName: \"kubernetes.io/projected/1de5d64c-937a-41c9-b68c-8832b18aabf1-kube-api-access-h5nc5\") pod \"barbican-db-create-k9p6j\" (UID: \"1de5d64c-937a-41c9-b68c-8832b18aabf1\") " pod="openstack/barbican-db-create-k9p6j" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.967692 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbzn\" (UniqueName: \"kubernetes.io/projected/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-kube-api-access-nvbzn\") pod \"neutron-db-create-p84rh\" (UID: \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\") " pod="openstack/neutron-db-create-p84rh" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.967837 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79d2z\" (UniqueName: \"kubernetes.io/projected/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-kube-api-access-79d2z\") pod \"cinder-6cc0-account-create-update-bj5pr\" (UID: \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\") " pod="openstack/cinder-6cc0-account-create-update-bj5pr" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.967978 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-operator-scripts\") pod \"cinder-6cc0-account-create-update-bj5pr\" (UID: \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\") " pod="openstack/cinder-6cc0-account-create-update-bj5pr" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.968091 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-operator-scripts\") pod \"neutron-db-create-p84rh\" (UID: \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\") " pod="openstack/neutron-db-create-p84rh" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.968768 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-operator-scripts\") pod \"cinder-6cc0-account-create-update-bj5pr\" (UID: \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\") " pod="openstack/cinder-6cc0-account-create-update-bj5pr" Jan 30 06:59:40 crc kubenswrapper[4520]: I0130 06:59:40.985971 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79d2z\" (UniqueName: \"kubernetes.io/projected/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-kube-api-access-79d2z\") pod \"cinder-6cc0-account-create-update-bj5pr\" (UID: \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\") " pod="openstack/cinder-6cc0-account-create-update-bj5pr" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.016886 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b11b-account-create-update-js482"] Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.019985 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b11b-account-create-update-js482" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.022868 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.026536 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k9p6j" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.031058 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-kljc9"] Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.048932 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b11b-account-create-update-js482"] Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.069644 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-operator-scripts\") pod \"neutron-db-create-p84rh\" (UID: \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\") " pod="openstack/neutron-db-create-p84rh" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.069705 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-operator-scripts\") pod \"neutron-b11b-account-create-update-js482\" (UID: \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\") " pod="openstack/neutron-b11b-account-create-update-js482" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.069751 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbzn\" (UniqueName: \"kubernetes.io/projected/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-kube-api-access-nvbzn\") pod \"neutron-db-create-p84rh\" (UID: \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\") " pod="openstack/neutron-db-create-p84rh" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.069849 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lgg\" (UniqueName: \"kubernetes.io/projected/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-kube-api-access-d6lgg\") pod \"neutron-b11b-account-create-update-js482\" (UID: \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\") " pod="openstack/neutron-b11b-account-create-update-js482" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.070572 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-operator-scripts\") pod \"neutron-db-create-p84rh\" (UID: \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\") " pod="openstack/neutron-db-create-p84rh" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.086628 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbzn\" (UniqueName: \"kubernetes.io/projected/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-kube-api-access-nvbzn\") pod \"neutron-db-create-p84rh\" (UID: \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\") " pod="openstack/neutron-db-create-p84rh" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.144019 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6cc0-account-create-update-bj5pr" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.171942 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-operator-scripts\") pod \"neutron-b11b-account-create-update-js482\" (UID: \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\") " pod="openstack/neutron-b11b-account-create-update-js482" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.172062 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6lgg\" (UniqueName: \"kubernetes.io/projected/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-kube-api-access-d6lgg\") pod \"neutron-b11b-account-create-update-js482\" (UID: \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\") " pod="openstack/neutron-b11b-account-create-update-js482" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.172954 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-operator-scripts\") pod \"neutron-b11b-account-create-update-js482\" (UID: \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\") " pod="openstack/neutron-b11b-account-create-update-js482" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.201568 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6lgg\" (UniqueName: \"kubernetes.io/projected/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-kube-api-access-d6lgg\") pod \"neutron-b11b-account-create-update-js482\" (UID: \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\") " pod="openstack/neutron-b11b-account-create-update-js482" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.250759 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p84rh" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.326070 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ac5a-account-create-update-xh94n"] Jan 30 06:59:41 crc kubenswrapper[4520]: W0130 06:59:41.346780 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod839c4efd_2ebb_43d0_9bdb_8dcd83737a8a.slice/crio-5d75f8c20db52633c07c167878fd848929a5fc337e4fd4fd0acf426b001b77d4 WatchSource:0}: Error finding container 5d75f8c20db52633c07c167878fd848929a5fc337e4fd4fd0acf426b001b77d4: Status 404 returned error can't find the container with id 5d75f8c20db52633c07c167878fd848929a5fc337e4fd4fd0acf426b001b77d4 Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.355058 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b11b-account-create-update-js482" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.372724 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1396-account-create-update-qtf79"] Jan 30 06:59:41 crc kubenswrapper[4520]: W0130 06:59:41.428151 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88a57446_d8a7_45ce_ac2a_1704429731a7.slice/crio-99a07125254a7060bf0ea31542d000a513400353138d66c371d3895b10037840 WatchSource:0}: Error finding container 99a07125254a7060bf0ea31542d000a513400353138d66c371d3895b10037840: Status 404 returned error can't find the container with id 99a07125254a7060bf0ea31542d000a513400353138d66c371d3895b10037840 Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.718570 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1396-account-create-update-qtf79" event={"ID":"88a57446-d8a7-45ce-ac2a-1704429731a7","Type":"ContainerStarted","Data":"b1edc1d10fcef42e5c5803ab0bd3d7da3ca1bf726d33280b07680bf4d499eb6a"} Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.719003 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1396-account-create-update-qtf79" event={"ID":"88a57446-d8a7-45ce-ac2a-1704429731a7","Type":"ContainerStarted","Data":"99a07125254a7060bf0ea31542d000a513400353138d66c371d3895b10037840"} Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.724972 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ac5a-account-create-update-xh94n" event={"ID":"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a","Type":"ContainerStarted","Data":"2bbecffc128a1431bd85a48a173bb01c6bbf667018c7e9cc5e6bb9501c39e6e5"} Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.725002 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ac5a-account-create-update-xh94n" event={"ID":"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a","Type":"ContainerStarted","Data":"5d75f8c20db52633c07c167878fd848929a5fc337e4fd4fd0acf426b001b77d4"} Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.737257 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-1396-account-create-update-qtf79" podStartSLOduration=1.737247525 podStartE2EDuration="1.737247525s" podCreationTimestamp="2026-01-30 06:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:59:41.736713702 +0000 UTC m=+895.365065883" watchObservedRunningTime="2026-01-30 06:59:41.737247525 +0000 UTC m=+895.365599707" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.748166 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kljc9" event={"ID":"db4d1798-73e8-4315-87d5-e638d87abfd5","Type":"ContainerStarted","Data":"d827e9732e87b1e4d40924886162fe5254321ebf7c4533d3b8e36daf3df66ba0"} Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.748208 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kljc9" event={"ID":"db4d1798-73e8-4315-87d5-e638d87abfd5","Type":"ContainerStarted","Data":"357b06ba8118d20a8b00b0d07d2030e80330f7036e0f94deff312f561a741168"} Jan 30 06:59:41 crc kubenswrapper[4520]: E0130 06:59:41.764952 4520 secret.go:188] Couldn't get secret openstack/keystone-config-data: failed to sync secret cache: timed out waiting for the condition Jan 30 06:59:41 crc kubenswrapper[4520]: E0130 06:59:41.765022 
4520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data podName:f732baff-71b8-4edc-8ec9-ebf30a096f74 nodeName:}" failed. No retries permitted until 2026-01-30 06:59:42.265003719 +0000 UTC m=+895.893355900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data") pod "keystone-db-sync-qhxqf" (UID: "f732baff-71b8-4edc-8ec9-ebf30a096f74") : failed to sync secret cache: timed out waiting for the condition Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.765665 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k9p6j"] Jan 30 06:59:41 crc kubenswrapper[4520]: W0130 06:59:41.767142 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1de5d64c_937a_41c9_b68c_8832b18aabf1.slice/crio-82e2be413ad27bc1286fcab8b1f5a20607186dcff31795e563251c0752adcb13 WatchSource:0}: Error finding container 82e2be413ad27bc1286fcab8b1f5a20607186dcff31795e563251c0752adcb13: Status 404 returned error can't find the container with id 82e2be413ad27bc1286fcab8b1f5a20607186dcff31795e563251c0752adcb13 Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.768863 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.776277 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-ac5a-account-create-update-xh94n" podStartSLOduration=1.7762463450000001 podStartE2EDuration="1.776246345s" podCreationTimestamp="2026-01-30 06:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:59:41.760709095 +0000 UTC m=+895.389061276" watchObservedRunningTime="2026-01-30 06:59:41.776246345 +0000 UTC m=+895.404598527" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.797063 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-kljc9" podStartSLOduration=1.797047974 podStartE2EDuration="1.797047974s" podCreationTimestamp="2026-01-30 06:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:59:41.776004261 +0000 UTC m=+895.404356441" watchObservedRunningTime="2026-01-30 06:59:41.797047974 +0000 UTC m=+895.425400156" Jan 30 06:59:41 crc kubenswrapper[4520]: W0130 06:59:41.817978 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4564e1a_9135_4edd_842b_e4954834ae5d.slice/crio-32fcaf8f5123a65a22314e1dd2e45cb383907a2fd6469652c80e6a3e05a9bc84 WatchSource:0}: Error finding container 32fcaf8f5123a65a22314e1dd2e45cb383907a2fd6469652c80e6a3e05a9bc84: Status 404 returned error can't find the container with id 32fcaf8f5123a65a22314e1dd2e45cb383907a2fd6469652c80e6a3e05a9bc84 Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.837293 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.887410 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qccxn"] Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.915567 4520 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jddpd"
Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.915942 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p84rh"]
Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.929123 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6cc0-account-create-update-bj5pr"]
Jan 30 06:59:41 crc kubenswrapper[4520]: I0130 06:59:41.976973 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.104803 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b11b-account-create-update-js482"]
Jan 30 06:59:42 crc kubenswrapper[4520]: W0130 06:59:42.112462 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc29e2dd5_25c0_4c49_8d73_30db73b5bc36.slice/crio-7ebb08e90379e98c5b250361cfafaccfe2953a263ac61d9766083228639b6a35 WatchSource:0}: Error finding container 7ebb08e90379e98c5b250361cfafaccfe2953a263ac61d9766083228639b6a35: Status 404 returned error can't find the container with id 7ebb08e90379e98c5b250361cfafaccfe2953a263ac61d9766083228639b6a35
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.309547 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf"
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.316144 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data\") pod \"keystone-db-sync-qhxqf\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") " pod="openstack/keystone-db-sync-qhxqf"
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.498075 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qhxqf"
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.756962 4520 generic.go:334] "Generic (PLEG): container finished" podID="db4d1798-73e8-4315-87d5-e638d87abfd5" containerID="d827e9732e87b1e4d40924886162fe5254321ebf7c4533d3b8e36daf3df66ba0" exitCode=0
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.757632 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kljc9" event={"ID":"db4d1798-73e8-4315-87d5-e638d87abfd5","Type":"ContainerDied","Data":"d827e9732e87b1e4d40924886162fe5254321ebf7c4533d3b8e36daf3df66ba0"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.764297 4520 generic.go:334] "Generic (PLEG): container finished" podID="c29e2dd5-25c0-4c49-8d73-30db73b5bc36" containerID="7fbb3867084f1a5cd03fa0e5aaa627b1ebcfe0697d8ea7a9c13b5ebe902a303f" exitCode=0
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.764419 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b11b-account-create-update-js482" event={"ID":"c29e2dd5-25c0-4c49-8d73-30db73b5bc36","Type":"ContainerDied","Data":"7fbb3867084f1a5cd03fa0e5aaa627b1ebcfe0697d8ea7a9c13b5ebe902a303f"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.764488 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b11b-account-create-update-js482" event={"ID":"c29e2dd5-25c0-4c49-8d73-30db73b5bc36","Type":"ContainerStarted","Data":"7ebb08e90379e98c5b250361cfafaccfe2953a263ac61d9766083228639b6a35"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.765623 4520 generic.go:334] "Generic (PLEG): container finished" podID="1de5d64c-937a-41c9-b68c-8832b18aabf1" containerID="5d3f08c4faa574c6d0b2cb26aec43dcf834a75527dd4431b1c1c815ed5cf3015" exitCode=0
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.765747 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k9p6j" event={"ID":"1de5d64c-937a-41c9-b68c-8832b18aabf1","Type":"ContainerDied","Data":"5d3f08c4faa574c6d0b2cb26aec43dcf834a75527dd4431b1c1c815ed5cf3015"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.765817 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k9p6j" event={"ID":"1de5d64c-937a-41c9-b68c-8832b18aabf1","Type":"ContainerStarted","Data":"82e2be413ad27bc1286fcab8b1f5a20607186dcff31795e563251c0752adcb13"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.766972 4520 generic.go:334] "Generic (PLEG): container finished" podID="fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925" containerID="3c3de59591f4d8d268f7172f764b47db2d324b4622902239d22e6722b65411b5" exitCode=0
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.767076 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6cc0-account-create-update-bj5pr" event={"ID":"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925","Type":"ContainerDied","Data":"3c3de59591f4d8d268f7172f764b47db2d324b4622902239d22e6722b65411b5"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.767138 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6cc0-account-create-update-bj5pr" event={"ID":"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925","Type":"ContainerStarted","Data":"595c9d972cb0bd8ad486fc6ec52aa5bc7ec32148d8fde4d7c7004993fa599f33"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.768270 4520 generic.go:334] "Generic (PLEG): container finished" podID="88a57446-d8a7-45ce-ac2a-1704429731a7" containerID="b1edc1d10fcef42e5c5803ab0bd3d7da3ca1bf726d33280b07680bf4d499eb6a" exitCode=0
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.768374 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1396-account-create-update-qtf79" event={"ID":"88a57446-d8a7-45ce-ac2a-1704429731a7","Type":"ContainerDied","Data":"b1edc1d10fcef42e5c5803ab0bd3d7da3ca1bf726d33280b07680bf4d499eb6a"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.776404 4520 generic.go:334] "Generic (PLEG): container finished" podID="d4564e1a-9135-4edd-842b-e4954834ae5d" containerID="b2bb100ba44a7fbaeb33c0fa46c1c5aa4d4088d24572ea65caa31dec2a0d9076" exitCode=0
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.777172 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qccxn" event={"ID":"d4564e1a-9135-4edd-842b-e4954834ae5d","Type":"ContainerDied","Data":"b2bb100ba44a7fbaeb33c0fa46c1c5aa4d4088d24572ea65caa31dec2a0d9076"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.777328 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qccxn" event={"ID":"d4564e1a-9135-4edd-842b-e4954834ae5d","Type":"ContainerStarted","Data":"32fcaf8f5123a65a22314e1dd2e45cb383907a2fd6469652c80e6a3e05a9bc84"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.778614 4520 generic.go:334] "Generic (PLEG): container finished" podID="839c4efd-2ebb-43d0-9bdb-8dcd83737a8a" containerID="2bbecffc128a1431bd85a48a173bb01c6bbf667018c7e9cc5e6bb9501c39e6e5" exitCode=0
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.778734 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ac5a-account-create-update-xh94n" event={"ID":"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a","Type":"ContainerDied","Data":"2bbecffc128a1431bd85a48a173bb01c6bbf667018c7e9cc5e6bb9501c39e6e5"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.780479 4520 generic.go:334] "Generic (PLEG): container finished" podID="7ae04536-592c-4d7c-bbeb-8ef1df3370a7" containerID="28b6ac33e87ee15d265f0f8151e19a85c43f78fcd6194bdcd67b8c5c90ea3bf1" exitCode=0
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.780569 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p84rh" event={"ID":"7ae04536-592c-4d7c-bbeb-8ef1df3370a7","Type":"ContainerDied","Data":"28b6ac33e87ee15d265f0f8151e19a85c43f78fcd6194bdcd67b8c5c90ea3bf1"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.780599 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p84rh" event={"ID":"7ae04536-592c-4d7c-bbeb-8ef1df3370a7","Type":"ContainerStarted","Data":"3695e599dd3095ff7ba63a94c92cda16d7c1d7df3c354fc788c526b9ec9a2e25"}
Jan 30 06:59:42 crc kubenswrapper[4520]: I0130 06:59:42.964280 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qhxqf"]
Jan 30 06:59:43 crc kubenswrapper[4520]: I0130 06:59:43.791085 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhxqf" event={"ID":"f732baff-71b8-4edc-8ec9-ebf30a096f74","Type":"ContainerStarted","Data":"f4332c523d6526dc0f7a408de6b35f681f31ddd07087a6631752bfd6abdb9ebb"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.344037 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b11b-account-create-update-js482"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.380721 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.461096 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd8468b69-r99hr"]
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.461497 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" podUID="d4fdb7e3-5390-4912-8331-36f326f97d7c" containerName="dnsmasq-dns" containerID="cri-o://269bfec4b7622aa7423843d986a593d6b111a79ffae4958811e2e3431f60f5bc" gracePeriod=10
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.470903 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-operator-scripts\") pod \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\" (UID: \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.471002 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6lgg\" (UniqueName: \"kubernetes.io/projected/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-kube-api-access-d6lgg\") pod \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\" (UID: \"c29e2dd5-25c0-4c49-8d73-30db73b5bc36\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.472943 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c29e2dd5-25c0-4c49-8d73-30db73b5bc36" (UID: "c29e2dd5-25c0-4c49-8d73-30db73b5bc36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.499764 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-kube-api-access-d6lgg" (OuterVolumeSpecName: "kube-api-access-d6lgg") pod "c29e2dd5-25c0-4c49-8d73-30db73b5bc36" (UID: "c29e2dd5-25c0-4c49-8d73-30db73b5bc36"). InnerVolumeSpecName "kube-api-access-d6lgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.574533 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.574562 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6lgg\" (UniqueName: \"kubernetes.io/projected/c29e2dd5-25c0-4c49-8d73-30db73b5bc36-kube-api-access-d6lgg\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.622530 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kljc9"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.628853 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p84rh"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.643004 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1396-account-create-update-qtf79"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.675357 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s455s\" (UniqueName: \"kubernetes.io/projected/db4d1798-73e8-4315-87d5-e638d87abfd5-kube-api-access-s455s\") pod \"db4d1798-73e8-4315-87d5-e638d87abfd5\" (UID: \"db4d1798-73e8-4315-87d5-e638d87abfd5\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.675499 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4d1798-73e8-4315-87d5-e638d87abfd5-operator-scripts\") pod \"db4d1798-73e8-4315-87d5-e638d87abfd5\" (UID: \"db4d1798-73e8-4315-87d5-e638d87abfd5\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.677843 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db4d1798-73e8-4315-87d5-e638d87abfd5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db4d1798-73e8-4315-87d5-e638d87abfd5" (UID: "db4d1798-73e8-4315-87d5-e638d87abfd5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.689120 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4d1798-73e8-4315-87d5-e638d87abfd5-kube-api-access-s455s" (OuterVolumeSpecName: "kube-api-access-s455s") pod "db4d1798-73e8-4315-87d5-e638d87abfd5" (UID: "db4d1798-73e8-4315-87d5-e638d87abfd5"). InnerVolumeSpecName "kube-api-access-s455s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.775367 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qccxn"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.777493 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88a57446-d8a7-45ce-ac2a-1704429731a7-operator-scripts\") pod \"88a57446-d8a7-45ce-ac2a-1704429731a7\" (UID: \"88a57446-d8a7-45ce-ac2a-1704429731a7\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.777585 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-operator-scripts\") pod \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\" (UID: \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.778402 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs8gl\" (UniqueName: \"kubernetes.io/projected/88a57446-d8a7-45ce-ac2a-1704429731a7-kube-api-access-vs8gl\") pod \"88a57446-d8a7-45ce-ac2a-1704429731a7\" (UID: \"88a57446-d8a7-45ce-ac2a-1704429731a7\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.778457 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvbzn\" (UniqueName: \"kubernetes.io/projected/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-kube-api-access-nvbzn\") pod \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\" (UID: \"7ae04536-592c-4d7c-bbeb-8ef1df3370a7\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.779395 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s455s\" (UniqueName: \"kubernetes.io/projected/db4d1798-73e8-4315-87d5-e638d87abfd5-kube-api-access-s455s\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.779410 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db4d1798-73e8-4315-87d5-e638d87abfd5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.780974 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a57446-d8a7-45ce-ac2a-1704429731a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88a57446-d8a7-45ce-ac2a-1704429731a7" (UID: "88a57446-d8a7-45ce-ac2a-1704429731a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.781355 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ae04536-592c-4d7c-bbeb-8ef1df3370a7" (UID: "7ae04536-592c-4d7c-bbeb-8ef1df3370a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.792020 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a57446-d8a7-45ce-ac2a-1704429731a7-kube-api-access-vs8gl" (OuterVolumeSpecName: "kube-api-access-vs8gl") pod "88a57446-d8a7-45ce-ac2a-1704429731a7" (UID: "88a57446-d8a7-45ce-ac2a-1704429731a7"). InnerVolumeSpecName "kube-api-access-vs8gl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.802719 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ac5a-account-create-update-xh94n"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.806025 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6cc0-account-create-update-bj5pr"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.811789 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-kube-api-access-nvbzn" (OuterVolumeSpecName: "kube-api-access-nvbzn") pod "7ae04536-592c-4d7c-bbeb-8ef1df3370a7" (UID: "7ae04536-592c-4d7c-bbeb-8ef1df3370a7"). InnerVolumeSpecName "kube-api-access-nvbzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.812989 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k9p6j"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.813305 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ac5a-account-create-update-xh94n" event={"ID":"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a","Type":"ContainerDied","Data":"5d75f8c20db52633c07c167878fd848929a5fc337e4fd4fd0acf426b001b77d4"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.816347 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d75f8c20db52633c07c167878fd848929a5fc337e4fd4fd0acf426b001b77d4"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.813351 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ac5a-account-create-update-xh94n"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.828045 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p84rh" event={"ID":"7ae04536-592c-4d7c-bbeb-8ef1df3370a7","Type":"ContainerDied","Data":"3695e599dd3095ff7ba63a94c92cda16d7c1d7df3c354fc788c526b9ec9a2e25"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.828120 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p84rh"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.828133 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3695e599dd3095ff7ba63a94c92cda16d7c1d7df3c354fc788c526b9ec9a2e25"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.835422 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kljc9" event={"ID":"db4d1798-73e8-4315-87d5-e638d87abfd5","Type":"ContainerDied","Data":"357b06ba8118d20a8b00b0d07d2030e80330f7036e0f94deff312f561a741168"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.835455 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="357b06ba8118d20a8b00b0d07d2030e80330f7036e0f94deff312f561a741168"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.835402 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kljc9"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.868010 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b11b-account-create-update-js482"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.868030 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b11b-account-create-update-js482" event={"ID":"c29e2dd5-25c0-4c49-8d73-30db73b5bc36","Type":"ContainerDied","Data":"7ebb08e90379e98c5b250361cfafaccfe2953a263ac61d9766083228639b6a35"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.868059 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebb08e90379e98c5b250361cfafaccfe2953a263ac61d9766083228639b6a35"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.878901 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6cc0-account-create-update-bj5pr"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.878921 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6cc0-account-create-update-bj5pr" event={"ID":"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925","Type":"ContainerDied","Data":"595c9d972cb0bd8ad486fc6ec52aa5bc7ec32148d8fde4d7c7004993fa599f33"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.879241 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595c9d972cb0bd8ad486fc6ec52aa5bc7ec32148d8fde4d7c7004993fa599f33"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.880200 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwfsd\" (UniqueName: \"kubernetes.io/projected/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-kube-api-access-dwfsd\") pod \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\" (UID: \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.880291 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5nc5\" (UniqueName: \"kubernetes.io/projected/1de5d64c-937a-41c9-b68c-8832b18aabf1-kube-api-access-h5nc5\") pod \"1de5d64c-937a-41c9-b68c-8832b18aabf1\" (UID: \"1de5d64c-937a-41c9-b68c-8832b18aabf1\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.880318 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-operator-scripts\") pod \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\" (UID: \"839c4efd-2ebb-43d0-9bdb-8dcd83737a8a\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.880334 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de5d64c-937a-41c9-b68c-8832b18aabf1-operator-scripts\") pod \"1de5d64c-937a-41c9-b68c-8832b18aabf1\" (UID: \"1de5d64c-937a-41c9-b68c-8832b18aabf1\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.880415 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfjg\" (UniqueName: \"kubernetes.io/projected/d4564e1a-9135-4edd-842b-e4954834ae5d-kube-api-access-hqfjg\") pod \"d4564e1a-9135-4edd-842b-e4954834ae5d\" (UID: \"d4564e1a-9135-4edd-842b-e4954834ae5d\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.880454 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79d2z\" (UniqueName: \"kubernetes.io/projected/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-kube-api-access-79d2z\") pod \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\" (UID: \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.880505 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4564e1a-9135-4edd-842b-e4954834ae5d-operator-scripts\") pod \"d4564e1a-9135-4edd-842b-e4954834ae5d\" (UID: \"d4564e1a-9135-4edd-842b-e4954834ae5d\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.880629 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-operator-scripts\") pod \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\" (UID: \"fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925\") "
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.881087 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88a57446-d8a7-45ce-ac2a-1704429731a7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.881101 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.881128 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs8gl\" (UniqueName: \"kubernetes.io/projected/88a57446-d8a7-45ce-ac2a-1704429731a7-kube-api-access-vs8gl\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.881138 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvbzn\" (UniqueName: \"kubernetes.io/projected/7ae04536-592c-4d7c-bbeb-8ef1df3370a7-kube-api-access-nvbzn\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.881549 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925" (UID: "fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.881636 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1de5d64c-937a-41c9-b68c-8832b18aabf1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1de5d64c-937a-41c9-b68c-8832b18aabf1" (UID: "1de5d64c-937a-41c9-b68c-8832b18aabf1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.882045 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1396-account-create-update-qtf79" event={"ID":"88a57446-d8a7-45ce-ac2a-1704429731a7","Type":"ContainerDied","Data":"99a07125254a7060bf0ea31542d000a513400353138d66c371d3895b10037840"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.882071 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a07125254a7060bf0ea31542d000a513400353138d66c371d3895b10037840"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.882232 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "839c4efd-2ebb-43d0-9bdb-8dcd83737a8a" (UID: "839c4efd-2ebb-43d0-9bdb-8dcd83737a8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.882349 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1396-account-create-update-qtf79"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.883848 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4564e1a-9135-4edd-842b-e4954834ae5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4564e1a-9135-4edd-842b-e4954834ae5d" (UID: "d4564e1a-9135-4edd-842b-e4954834ae5d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.885700 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-kube-api-access-79d2z" (OuterVolumeSpecName: "kube-api-access-79d2z") pod "fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925" (UID: "fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925"). InnerVolumeSpecName "kube-api-access-79d2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.887750 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-kube-api-access-dwfsd" (OuterVolumeSpecName: "kube-api-access-dwfsd") pod "839c4efd-2ebb-43d0-9bdb-8dcd83737a8a" (UID: "839c4efd-2ebb-43d0-9bdb-8dcd83737a8a"). InnerVolumeSpecName "kube-api-access-dwfsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.889386 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qccxn" event={"ID":"d4564e1a-9135-4edd-842b-e4954834ae5d","Type":"ContainerDied","Data":"32fcaf8f5123a65a22314e1dd2e45cb383907a2fd6469652c80e6a3e05a9bc84"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.889408 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32fcaf8f5123a65a22314e1dd2e45cb383907a2fd6469652c80e6a3e05a9bc84"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.889500 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qccxn"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.890443 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4564e1a-9135-4edd-842b-e4954834ae5d-kube-api-access-hqfjg" (OuterVolumeSpecName: "kube-api-access-hqfjg") pod "d4564e1a-9135-4edd-842b-e4954834ae5d" (UID: "d4564e1a-9135-4edd-842b-e4954834ae5d"). InnerVolumeSpecName "kube-api-access-hqfjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.894050 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de5d64c-937a-41c9-b68c-8832b18aabf1-kube-api-access-h5nc5" (OuterVolumeSpecName: "kube-api-access-h5nc5") pod "1de5d64c-937a-41c9-b68c-8832b18aabf1" (UID: "1de5d64c-937a-41c9-b68c-8832b18aabf1"). InnerVolumeSpecName "kube-api-access-h5nc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.897721 4520 generic.go:334] "Generic (PLEG): container finished" podID="d4fdb7e3-5390-4912-8331-36f326f97d7c" containerID="269bfec4b7622aa7423843d986a593d6b111a79ffae4958811e2e3431f60f5bc" exitCode=0
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.897757 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" event={"ID":"d4fdb7e3-5390-4912-8331-36f326f97d7c","Type":"ContainerDied","Data":"269bfec4b7622aa7423843d986a593d6b111a79ffae4958811e2e3431f60f5bc"}
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.985718 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.985886 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwfsd\" (UniqueName: \"kubernetes.io/projected/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-kube-api-access-dwfsd\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.986099 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd8468b69-r99hr"
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.986195 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5nc5\" (UniqueName: \"kubernetes.io/projected/1de5d64c-937a-41c9-b68c-8832b18aabf1-kube-api-access-h5nc5\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.986272 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.986332 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de5d64c-937a-41c9-b68c-8832b18aabf1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.987560 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfjg\" (UniqueName: \"kubernetes.io/projected/d4564e1a-9135-4edd-842b-e4954834ae5d-kube-api-access-hqfjg\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.987602 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79d2z\" (UniqueName: \"kubernetes.io/projected/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925-kube-api-access-79d2z\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:44 crc kubenswrapper[4520]: I0130 06:59:44.987618 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4564e1a-9135-4edd-842b-e4954834ae5d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.088888 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbr69\" (UniqueName: \"kubernetes.io/projected/d4fdb7e3-5390-4912-8331-36f326f97d7c-kube-api-access-nbr69\") pod \"d4fdb7e3-5390-4912-8331-36f326f97d7c\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") "
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.088995 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-sb\") pod \"d4fdb7e3-5390-4912-8331-36f326f97d7c\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") "
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.089156 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-nb\") pod \"d4fdb7e3-5390-4912-8331-36f326f97d7c\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") "
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.089302 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-config\") pod \"d4fdb7e3-5390-4912-8331-36f326f97d7c\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") "
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.089345 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-dns-svc\") pod \"d4fdb7e3-5390-4912-8331-36f326f97d7c\" (UID: \"d4fdb7e3-5390-4912-8331-36f326f97d7c\") "
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.093633 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fdb7e3-5390-4912-8331-36f326f97d7c-kube-api-access-nbr69" (OuterVolumeSpecName: "kube-api-access-nbr69") pod "d4fdb7e3-5390-4912-8331-36f326f97d7c" (UID: "d4fdb7e3-5390-4912-8331-36f326f97d7c"). InnerVolumeSpecName "kube-api-access-nbr69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.138281 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4fdb7e3-5390-4912-8331-36f326f97d7c" (UID: "d4fdb7e3-5390-4912-8331-36f326f97d7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.144772 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-config" (OuterVolumeSpecName: "config") pod "d4fdb7e3-5390-4912-8331-36f326f97d7c" (UID: "d4fdb7e3-5390-4912-8331-36f326f97d7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.149914 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4fdb7e3-5390-4912-8331-36f326f97d7c" (UID: "d4fdb7e3-5390-4912-8331-36f326f97d7c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.156321 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4fdb7e3-5390-4912-8331-36f326f97d7c" (UID: "d4fdb7e3-5390-4912-8331-36f326f97d7c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.192814 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.192913 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.192981 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-config\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.193049 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4fdb7e3-5390-4912-8331-36f326f97d7c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.193107 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbr69\" (UniqueName: \"kubernetes.io/projected/d4fdb7e3-5390-4912-8331-36f326f97d7c-kube-api-access-nbr69\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.912461 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k9p6j" event={"ID":"1de5d64c-937a-41c9-b68c-8832b18aabf1","Type":"ContainerDied","Data":"82e2be413ad27bc1286fcab8b1f5a20607186dcff31795e563251c0752adcb13"}
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.912535 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82e2be413ad27bc1286fcab8b1f5a20607186dcff31795e563251c0752adcb13"
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.912503 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k9p6j"
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.915619 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd8468b69-r99hr" event={"ID":"d4fdb7e3-5390-4912-8331-36f326f97d7c","Type":"ContainerDied","Data":"ed40fe0a23169aea0c2aa9b155854b54ea17e75a347c8b94d30473b81c8ed399"}
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.915700 4520 scope.go:117] "RemoveContainer" containerID="269bfec4b7622aa7423843d986a593d6b111a79ffae4958811e2e3431f60f5bc"
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.915718 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd8468b69-r99hr"
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.965502 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd8468b69-r99hr"]
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.971211 4520 scope.go:117] "RemoveContainer" containerID="3e8493e02642e4746126afe47a4c4e5277c49f14e2050b01ab0eaafa48569ab5"
Jan 30 06:59:45 crc kubenswrapper[4520]: I0130 06:59:45.972611 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd8468b69-r99hr"]
Jan 30 06:59:46 crc kubenswrapper[4520]: I0130 06:59:46.705399 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fdb7e3-5390-4912-8331-36f326f97d7c" path="/var/lib/kubelet/pods/d4fdb7e3-5390-4912-8331-36f326f97d7c/volumes"
Jan 30 06:59:49 crc kubenswrapper[4520]: I0130 06:59:49.954879 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhxqf" event={"ID":"f732baff-71b8-4edc-8ec9-ebf30a096f74","Type":"ContainerStarted","Data":"fc868ea9c3011014ed19a09e1b9da359fb2a8407c9da21c4e31470935afbf713"}
Jan 30 06:59:51 crc kubenswrapper[4520]: I0130 06:59:51.978356 4520 generic.go:334] "Generic (PLEG): container finished" podID="f732baff-71b8-4edc-8ec9-ebf30a096f74" containerID="fc868ea9c3011014ed19a09e1b9da359fb2a8407c9da21c4e31470935afbf713" exitCode=0
Jan 30 06:59:51 crc kubenswrapper[4520]: I0130 06:59:51.978835 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhxqf" event={"ID":"f732baff-71b8-4edc-8ec9-ebf30a096f74","Type":"ContainerDied","Data":"fc868ea9c3011014ed19a09e1b9da359fb2a8407c9da21c4e31470935afbf713"}
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.427011 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qhxqf"
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.578955 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzfkt\" (UniqueName: \"kubernetes.io/projected/f732baff-71b8-4edc-8ec9-ebf30a096f74-kube-api-access-wzfkt\") pod \"f732baff-71b8-4edc-8ec9-ebf30a096f74\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") "
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.579210 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-combined-ca-bundle\") pod \"f732baff-71b8-4edc-8ec9-ebf30a096f74\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") "
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.579258 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data\") pod \"f732baff-71b8-4edc-8ec9-ebf30a096f74\" (UID: \"f732baff-71b8-4edc-8ec9-ebf30a096f74\") "
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.584636 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f732baff-71b8-4edc-8ec9-ebf30a096f74-kube-api-access-wzfkt" (OuterVolumeSpecName: "kube-api-access-wzfkt") pod "f732baff-71b8-4edc-8ec9-ebf30a096f74" (UID: "f732baff-71b8-4edc-8ec9-ebf30a096f74"). InnerVolumeSpecName "kube-api-access-wzfkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.602702 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f732baff-71b8-4edc-8ec9-ebf30a096f74" (UID: "f732baff-71b8-4edc-8ec9-ebf30a096f74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.614141 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data" (OuterVolumeSpecName: "config-data") pod "f732baff-71b8-4edc-8ec9-ebf30a096f74" (UID: "f732baff-71b8-4edc-8ec9-ebf30a096f74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.682101 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzfkt\" (UniqueName: \"kubernetes.io/projected/f732baff-71b8-4edc-8ec9-ebf30a096f74-kube-api-access-wzfkt\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.682142 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.682154 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f732baff-71b8-4edc-8ec9-ebf30a096f74-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.999440 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhxqf" event={"ID":"f732baff-71b8-4edc-8ec9-ebf30a096f74","Type":"ContainerDied","Data":"f4332c523d6526dc0f7a408de6b35f681f31ddd07087a6631752bfd6abdb9ebb"}
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.999497 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4332c523d6526dc0f7a408de6b35f681f31ddd07087a6631752bfd6abdb9ebb"
Jan 30 06:59:53 crc kubenswrapper[4520]: I0130 06:59:53.999511 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qhxqf"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.302956 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g6448"]
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303398 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fdb7e3-5390-4912-8331-36f326f97d7c" containerName="dnsmasq-dns"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303418 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fdb7e3-5390-4912-8331-36f326f97d7c" containerName="dnsmasq-dns"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303442 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29e2dd5-25c0-4c49-8d73-30db73b5bc36" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303450 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29e2dd5-25c0-4c49-8d73-30db73b5bc36" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303464 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f732baff-71b8-4edc-8ec9-ebf30a096f74" containerName="keystone-db-sync"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303469 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f732baff-71b8-4edc-8ec9-ebf30a096f74" containerName="keystone-db-sync"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303478 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a57446-d8a7-45ce-ac2a-1704429731a7" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303484 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a57446-d8a7-45ce-ac2a-1704429731a7" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303494 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae04536-592c-4d7c-bbeb-8ef1df3370a7" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303499 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae04536-592c-4d7c-bbeb-8ef1df3370a7" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303526 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4564e1a-9135-4edd-842b-e4954834ae5d" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303532 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4564e1a-9135-4edd-842b-e4954834ae5d" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303544 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303551 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303561 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839c4efd-2ebb-43d0-9bdb-8dcd83737a8a" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303567 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="839c4efd-2ebb-43d0-9bdb-8dcd83737a8a" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303577 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de5d64c-937a-41c9-b68c-8832b18aabf1" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303582 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de5d64c-937a-41c9-b68c-8832b18aabf1" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303592 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fdb7e3-5390-4912-8331-36f326f97d7c" containerName="init"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303598 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fdb7e3-5390-4912-8331-36f326f97d7c" containerName="init"
Jan 30 06:59:54 crc kubenswrapper[4520]: E0130 06:59:54.303605 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4d1798-73e8-4315-87d5-e638d87abfd5" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303612 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4d1798-73e8-4315-87d5-e638d87abfd5" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303767 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f732baff-71b8-4edc-8ec9-ebf30a096f74" containerName="keystone-db-sync"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303778 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4564e1a-9135-4edd-842b-e4954834ae5d" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303787 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae04536-592c-4d7c-bbeb-8ef1df3370a7" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303795 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a57446-d8a7-45ce-ac2a-1704429731a7" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303802 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303809 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de5d64c-937a-41c9-b68c-8832b18aabf1" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303822 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fdb7e3-5390-4912-8331-36f326f97d7c" containerName="dnsmasq-dns"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303829 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="839c4efd-2ebb-43d0-9bdb-8dcd83737a8a" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303839 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29e2dd5-25c0-4c49-8d73-30db73b5bc36" containerName="mariadb-account-create-update"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.303848 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4d1798-73e8-4315-87d5-e638d87abfd5" containerName="mariadb-database-create"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.304469 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.307076 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.307420 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.307573 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.307702 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.308380 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jddpd"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.315383 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-866bd8c4c5-qzvrc"]
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.317121 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.346342 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-866bd8c4c5-qzvrc"]
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.362806 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g6448"]
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.396449 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57f9v\" (UniqueName: \"kubernetes.io/projected/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-kube-api-access-57f9v\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.396708 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-scripts\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.396800 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-swift-storage-0\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.396915 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-combined-ca-bundle\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.396999 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-svc\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.397067 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-credential-keys\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.397156 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lml8n\" (UniqueName: \"kubernetes.io/projected/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-kube-api-access-lml8n\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.397248 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-nb\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.397312 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-config-data\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.397399 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-config\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.397470 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-fernet-keys\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.397567 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-sb\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.480177 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-qndsg"]
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.487577 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qndsg"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.493736 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.498850 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57f9v\" (UniqueName: \"kubernetes.io/projected/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-kube-api-access-57f9v\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499034 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-scripts\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499125 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-swift-storage-0\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499192 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-combined-ca-bundle\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499263 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-svc\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499333 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-credential-keys\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499423 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lml8n\" (UniqueName: \"kubernetes.io/projected/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-kube-api-access-lml8n\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499557 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-config-data\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499646 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-nb\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499786 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-config\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499859 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-fernet-keys\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.499953 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-sb\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.500183 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-swift-storage-0\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.501526 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-sb\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.501638 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-config\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.502197 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-nb\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.502645 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-svc\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.505713 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-combined-ca-bundle\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.508726 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-spr62"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.512111 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-fernet-keys\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.513427 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-scripts\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.525355 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-credential-keys\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.532023 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-config-data\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.532621 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qndsg"]
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.557214 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lml8n\" (UniqueName: \"kubernetes.io/projected/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-kube-api-access-lml8n\") pod \"keystone-bootstrap-g6448\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " pod="openstack/keystone-bootstrap-g6448"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.571881 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57f9v\" (UniqueName: \"kubernetes.io/projected/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-kube-api-access-57f9v\") pod \"dnsmasq-dns-866bd8c4c5-qzvrc\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.604858 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7zpp\" (UniqueName: \"kubernetes.io/projected/1771d5c5-4904-435a-81ac-80eaaf23bc68-kube-api-access-c7zpp\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.604989 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-combined-ca-bundle\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg"
Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.605120 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-config-data\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.619417 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g6448" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.632240 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.633465 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7ccf6f8c8c-g5kgh"] Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.634841 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.641392 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.642001 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.642238 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-kfwf9" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.660677 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.709290 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtdvq\" (UniqueName: \"kubernetes.io/projected/a9ba792b-9c9d-4e3a-ae77-22c24f473037-kube-api-access-dtdvq\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.709361 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-config-data\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.709397 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9ba792b-9c9d-4e3a-ae77-22c24f473037-logs\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.709449 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-scripts\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.709492 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9ba792b-9c9d-4e3a-ae77-22c24f473037-horizon-secret-key\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.709539 
4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7zpp\" (UniqueName: \"kubernetes.io/projected/1771d5c5-4904-435a-81ac-80eaaf23bc68-kube-api-access-c7zpp\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.709571 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-config-data\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.709614 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-combined-ca-bundle\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.714537 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-combined-ca-bundle\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.731225 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-config-data\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.752066 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xgsxk"] Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.753089 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.759881 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.760151 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.760308 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fj7s6" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.799600 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7zpp\" (UniqueName: \"kubernetes.io/projected/1771d5c5-4904-435a-81ac-80eaaf23bc68-kube-api-access-c7zpp\") pod \"heat-db-sync-qndsg\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " pod="openstack/heat-db-sync-qndsg" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.817672 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xgsxk"] Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.822579 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9ba792b-9c9d-4e3a-ae77-22c24f473037-logs\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.822743 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-scripts\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.822788 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9ba792b-9c9d-4e3a-ae77-22c24f473037-horizon-secret-key\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.822863 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-config-data\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.823007 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtdvq\" (UniqueName: \"kubernetes.io/projected/a9ba792b-9c9d-4e3a-ae77-22c24f473037-kube-api-access-dtdvq\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.826670 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-config-data\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.826948 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a9ba792b-9c9d-4e3a-ae77-22c24f473037-logs\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.827379 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-scripts\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.846161 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ccf6f8c8c-g5kgh"] Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.854261 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9ba792b-9c9d-4e3a-ae77-22c24f473037-horizon-secret-key\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.901702 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtdvq\" (UniqueName: \"kubernetes.io/projected/a9ba792b-9c9d-4e3a-ae77-22c24f473037-kube-api-access-dtdvq\") pod \"horizon-7ccf6f8c8c-g5kgh\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.937615 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.938805 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrzl\" (UniqueName: \"kubernetes.io/projected/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-kube-api-access-mfrzl\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.938900 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-combined-ca-bundle\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.938961 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-config-data\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.939059 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-db-sync-config-data\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.939123 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-etc-machine-id\") pod \"cinder-db-sync-xgsxk\" (UID: 
\"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.939165 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-scripts\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:54 crc kubenswrapper[4520]: I0130 06:59:54.939726 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qndsg" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:54.951588 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:54.960606 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:54.960727 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:54.964168 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:54.997691 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866bd8c4c5-qzvrc"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.042294 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfrzl\" (UniqueName: \"kubernetes.io/projected/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-kube-api-access-mfrzl\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.042614 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-combined-ca-bundle\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.042644 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-config-data\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.042695 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-db-sync-config-data\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.042719 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-etc-machine-id\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.042742 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-scripts\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.045969 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-etc-machine-id\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.066317 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-db-sync-config-data\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.072636 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.075027 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-combined-ca-bundle\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.084602 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-config-data\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.085053 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfrzl\" (UniqueName: \"kubernetes.io/projected/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-kube-api-access-mfrzl\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.097683 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-scripts\") pod \"cinder-db-sync-xgsxk\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.098622 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c8c7b95dc-bv8zz"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.100225 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.146021 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.146092 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-config-data\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.146163 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-log-httpd\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.146218 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8c7b95dc-bv8zz"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.146325 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.146398 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ts2\" (UniqueName: \"kubernetes.io/projected/4efe190c-047a-4463-9044-515816c2a7e1-kube-api-access-s2ts2\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.146420 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-scripts\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.146454 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-run-httpd\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.201560 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-t8smt"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.202726 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.208742 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.208918 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n8t4s" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.209025 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.221140 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qgzqb"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.222426 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.239186 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t8smt"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.254569 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256108 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-config-data\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256165 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-scripts\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256210 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256240 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-config\") pod \"neutron-db-sync-qgzqb\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256268 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256284 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5wcz\" (UniqueName: \"kubernetes.io/projected/0b1fa358-6b62-4cf6-a32c-89e98f169b42-kube-api-access-l5wcz\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 
30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256322 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-config-data\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256344 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-svc\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256385 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-log-httpd\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256401 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-combined-ca-bundle\") pod \"neutron-db-sync-qgzqb\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256418 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8bmg\" (UniqueName: \"kubernetes.io/projected/c46098fe-52c7-4a41-9a00-d156d5bfc4be-kube-api-access-c8bmg\") pod \"neutron-db-sync-qgzqb\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256439 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rs4\" (UniqueName: \"kubernetes.io/projected/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-kube-api-access-z7rs4\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256458 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-swift-storage-0\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256475 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256499 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 
06:59:55.256568 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-combined-ca-bundle\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.256587 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ts2\" (UniqueName: \"kubernetes.io/projected/4efe190c-047a-4463-9044-515816c2a7e1-kube-api-access-s2ts2\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.257828 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-scripts\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.257862 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-run-httpd\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.257895 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-config\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.257928 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1fa358-6b62-4cf6-a32c-89e98f169b42-logs\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.263104 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-log-httpd\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.264024 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-run-httpd\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.272582 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ld8j2"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.273773 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.277085 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-scripts\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.279606 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-config-data\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.287080 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.288011 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.288852 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.299670 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cjk6l" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.300227 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kt86f" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.300349 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.309157 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ts2\" (UniqueName: \"kubernetes.io/projected/4efe190c-047a-4463-9044-515816c2a7e1-kube-api-access-s2ts2\") pod \"ceilometer-0\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.316410 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.324690 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68988b9b57-dgctl"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.326007 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.353565 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qgzqb"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360242 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-config\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360274 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1fa358-6b62-4cf6-a32c-89e98f169b42-logs\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360323 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-config-data\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360349 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-scripts\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360367 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfc9\" (UniqueName: \"kubernetes.io/projected/018e4a09-2b6a-4f65-999c-01584f5d9972-kube-api-access-hrfc9\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360385 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-scripts\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360401 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018e4a09-2b6a-4f65-999c-01584f5d9972-logs\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360423 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360440 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-config\") pod \"neutron-db-sync-qgzqb\" (UID: 
\"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.360458 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5wcz\" (UniqueName: \"kubernetes.io/projected/0b1fa358-6b62-4cf6-a32c-89e98f169b42-kube-api-access-l5wcz\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.364577 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.366260 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.371255 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.371435 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.371602 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.371605 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018e4a09-2b6a-4f65-999c-01584f5d9972-horizon-secret-key\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.371637 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ndhjm" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.371646 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-svc\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.371691 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-combined-ca-bundle\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.371710 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-config-data\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.372536 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-config\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.372739 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-svc\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.372854 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1fa358-6b62-4cf6-a32c-89e98f169b42-logs\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.373172 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8r7\" (UniqueName: \"kubernetes.io/projected/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-kube-api-access-4z8r7\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.373229 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-combined-ca-bundle\") pod \"neutron-db-sync-qgzqb\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.373250 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8bmg\" (UniqueName: \"kubernetes.io/projected/c46098fe-52c7-4a41-9a00-d156d5bfc4be-kube-api-access-c8bmg\") pod \"neutron-db-sync-qgzqb\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.373272 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7rs4\" (UniqueName: \"kubernetes.io/projected/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-kube-api-access-z7rs4\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.373292 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-swift-storage-0\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.373312 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.373332 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-db-sync-config-data\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.373363 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-combined-ca-bundle\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.375531 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.376281 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-swift-storage-0\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.381876 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-config\") pod \"neutron-db-sync-qgzqb\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.383167 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xgsxk" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.389758 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.393625 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.406875 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7rs4\" (UniqueName: \"kubernetes.io/projected/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-kube-api-access-z7rs4\") pod \"dnsmasq-dns-7c8c7b95dc-bv8zz\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.413670 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-config-data\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.413780 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68988b9b57-dgctl"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.417045 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-combined-ca-bundle\") pod \"neutron-db-sync-qgzqb\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.425684 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-scripts\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.426121 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-combined-ca-bundle\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.435406 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8bmg\" (UniqueName: \"kubernetes.io/projected/c46098fe-52c7-4a41-9a00-d156d5bfc4be-kube-api-access-c8bmg\") pod \"neutron-db-sync-qgzqb\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.441581 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.447993 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ld8j2"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.449786 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5wcz\" (UniqueName: \"kubernetes.io/projected/0b1fa358-6b62-4cf6-a32c-89e98f169b42-kube-api-access-l5wcz\") pod \"placement-db-sync-t8smt\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.474720 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-scripts\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.474761 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrfc9\" (UniqueName: \"kubernetes.io/projected/018e4a09-2b6a-4f65-999c-01584f5d9972-kube-api-access-hrfc9\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.474789 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018e4a09-2b6a-4f65-999c-01584f5d9972-logs\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.474850 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018e4a09-2b6a-4f65-999c-01584f5d9972-horizon-secret-key\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.474885 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-combined-ca-bundle\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " 
pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.474899 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-config-data\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.474918 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z8r7\" (UniqueName: \"kubernetes.io/projected/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-kube-api-access-4z8r7\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.474983 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-db-sync-config-data\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.477675 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-db-sync-config-data\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.479389 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-scripts\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.484842 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018e4a09-2b6a-4f65-999c-01584f5d9972-logs\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.485740 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-config-data\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.491623 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018e4a09-2b6a-4f65-999c-01584f5d9972-horizon-secret-key\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.491689 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.493220 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.504263 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-combined-ca-bundle\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.504321 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.515045 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.515263 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.521833 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z8r7\" (UniqueName: \"kubernetes.io/projected/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-kube-api-access-4z8r7\") pod \"barbican-db-sync-ld8j2\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.533022 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrfc9\" (UniqueName: \"kubernetes.io/projected/018e4a09-2b6a-4f65-999c-01584f5d9972-kube-api-access-hrfc9\") pod \"horizon-68988b9b57-dgctl\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.622177 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.622349 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.622788 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-config-data\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.623107 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-scripts\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.624246 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t8smt" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.624567 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qgzqb" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.630810 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.631638 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.631785 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfhlp\" (UniqueName: \"kubernetes.io/projected/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-kube-api-access-pfhlp\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.631919 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-logs\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.633330 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ld8j2" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.660543 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68988b9b57-dgctl" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741490 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741567 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-scripts\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741669 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741702 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741727 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741767 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741810 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfhlp\" (UniqueName: \"kubernetes.io/projected/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-kube-api-access-pfhlp\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741834 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741859 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741909 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-logs\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741931 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k728f\" (UniqueName: \"kubernetes.io/projected/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-kube-api-access-k728f\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741958 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.741988 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.742012 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.742063 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-config-data\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.742090 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-logs\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.749763 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.768743 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-logs\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " 
pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.768780 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.769006 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.785734 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-scripts\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.801216 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-config-data\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.818861 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.819138 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.819662 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfhlp\" (UniqueName: \"kubernetes.io/projected/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-kube-api-access-pfhlp\") pod \"glance-default-external-api-0\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") " pod="openstack/glance-default-external-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.847689 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.847759 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: 
I0130 06:59:55.847783 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.847874 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k728f\" (UniqueName: \"kubernetes.io/projected/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-kube-api-access-k728f\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.847919 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.847997 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-logs\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.848021 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.848033 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.859931 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.861053 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-logs\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.864108 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.865540 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.866007 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.894177 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.901626 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.908027 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k728f\" (UniqueName: \"kubernetes.io/projected/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-kube-api-access-k728f\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:55 crc kubenswrapper[4520]: I0130 06:59:55.954709 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.032923 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.230012 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.298327 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g6448"] Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.323287 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866bd8c4c5-qzvrc"] Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.405240 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qndsg"] Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.418922 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ccf6f8c8c-g5kgh"] Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.899403 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t8smt"] Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.926037 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:59:56 crc kubenswrapper[4520]: W0130 06:59:56.957776 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode83f5d4e_e09c_49c7_b5a1_e7ec5b0da726.slice/crio-219da46f145d4ea7a0003fdc397076632f5054269f9051726645d4eb58941fda WatchSource:0}: Error finding container 219da46f145d4ea7a0003fdc397076632f5054269f9051726645d4eb58941fda: Status 404 returned error can't find the container with id 219da46f145d4ea7a0003fdc397076632f5054269f9051726645d4eb58941fda Jan 30 06:59:56 crc kubenswrapper[4520]: W0130 06:59:56.962778 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod018e4a09_2b6a_4f65_999c_01584f5d9972.slice/crio-daae3841408735311ff0308cbbc10aa51e642639ba9d81256030f983c502819a WatchSource:0}: Error finding container daae3841408735311ff0308cbbc10aa51e642639ba9d81256030f983c502819a: Status 404 returned error can't find the container with id daae3841408735311ff0308cbbc10aa51e642639ba9d81256030f983c502819a Jan 30 06:59:56 crc kubenswrapper[4520]: I0130 06:59:56.967815 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qgzqb"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.016548 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ld8j2"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.035534 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xgsxk"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.039040 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8c7b95dc-bv8zz"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.045146 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68988b9b57-dgctl"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.157990 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.159219 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ld8j2" event={"ID":"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4","Type":"ContainerStarted","Data":"fc68f1711e690b7b0fb339f9e6aec250a87486d49ac6a686216bb5752eac0d5e"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.165612 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerStarted","Data":"962e08117bef72825961dd4e0f0e2d8765d7ac5606e348815111c653963b0c4f"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.167900 4520 generic.go:334] "Generic (PLEG): container finished" podID="3444bf87-4258-4f03-81fb-2a6d8af3ccc6" containerID="4f93638abd6883e01921a384ecefe16526a1860ca1738136ab62ff950ca17b23" exitCode=0 Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.167959 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc" event={"ID":"3444bf87-4258-4f03-81fb-2a6d8af3ccc6","Type":"ContainerDied","Data":"4f93638abd6883e01921a384ecefe16526a1860ca1738136ab62ff950ca17b23"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.167984 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc" event={"ID":"3444bf87-4258-4f03-81fb-2a6d8af3ccc6","Type":"ContainerStarted","Data":"f432e6439fd03c315718e676ac6c847b1c0b159062ba151dc3b5f915df7e8492"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.172426 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68988b9b57-dgctl" event={"ID":"018e4a09-2b6a-4f65-999c-01584f5d9972","Type":"ContainerStarted","Data":"daae3841408735311ff0308cbbc10aa51e642639ba9d81256030f983c502819a"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.173383 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8smt" event={"ID":"0b1fa358-6b62-4cf6-a32c-89e98f169b42","Type":"ContainerStarted","Data":"a086183915d986c150d57ab719b79f1867b24718210424dc9ff5debc826d4844"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.176244 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qgzqb" event={"ID":"c46098fe-52c7-4a41-9a00-d156d5bfc4be","Type":"ContainerStarted","Data":"7fe24fa56e9786da0c377b6f0d30798538c1f4d245eb45985ca2695195a3b537"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.194721 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xgsxk" event={"ID":"fc2063bc-3a1e-4e9f-badc-299e256a2f3c","Type":"ContainerStarted","Data":"54132e99aee4f3da29ffe09eb2bd79bfdd3f16b50756229842c09cee4ab334fc"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.204154 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qndsg" event={"ID":"1771d5c5-4904-435a-81ac-80eaaf23bc68","Type":"ContainerStarted","Data":"50b703e09a192b0738dc936337e63cc6423f80880832bc5fa0432c923ace9add"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.222980 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" event={"ID":"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726","Type":"ContainerStarted","Data":"219da46f145d4ea7a0003fdc397076632f5054269f9051726645d4eb58941fda"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.233184 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g6448" event={"ID":"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c","Type":"ContainerStarted","Data":"ec8500b8477fefb4a8c65e86ad568dc4618d89be58fa29cf0eda07e8632c2b32"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.233212 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g6448" event={"ID":"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c","Type":"ContainerStarted","Data":"abf4148b55419adfa815923eb33f9cc40f8a6c067fb55cdacbe97e3bbd3abee0"} Jan 30 06:59:57 crc kubenswrapper[4520]: 
I0130 06:59:57.236950 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ccf6f8c8c-g5kgh" event={"ID":"a9ba792b-9c9d-4e3a-ae77-22c24f473037","Type":"ContainerStarted","Data":"0efd3b36c42c7b8c01657e9c6f218aa697500f5511ed2a4ec1acbb6522467e0c"} Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.271574 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g6448" podStartSLOduration=3.271564273 podStartE2EDuration="3.271564273s" podCreationTimestamp="2026-01-30 06:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:59:57.270053974 +0000 UTC m=+910.898406155" watchObservedRunningTime="2026-01-30 06:59:57.271564273 +0000 UTC m=+910.899916454" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.359693 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.436904 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ccf6f8c8c-g5kgh"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.474057 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-759c7d779-ckntp"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.475371 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.504681 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.520901 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.532571 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-759c7d779-ckntp"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.663200 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mmr\" (UniqueName: \"kubernetes.io/projected/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-kube-api-access-n6mmr\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.663263 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-config-data\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.663357 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-scripts\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.663382 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-horizon-secret-key\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " 
pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.663430 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-logs\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.724306 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.768847 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-scripts\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.769021 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-horizon-secret-key\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.769156 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-logs\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.769426 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mmr\" (UniqueName: \"kubernetes.io/projected/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-kube-api-access-n6mmr\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.769589 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-config-data\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.770030 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-logs\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.770504 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-scripts\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.777222 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-config-data\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc 
kubenswrapper[4520]: I0130 06:59:57.811037 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-horizon-secret-key\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.822150 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mmr\" (UniqueName: \"kubernetes.io/projected/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-kube-api-access-n6mmr\") pod \"horizon-759c7d779-ckntp\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:57 crc kubenswrapper[4520]: I0130 06:59:57.984927 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.091296 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-config\") pod \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.091424 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-nb\") pod \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.091505 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-sb\") pod \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.091551 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-swift-storage-0\") pod \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.091604 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57f9v\" (UniqueName: \"kubernetes.io/projected/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-kube-api-access-57f9v\") pod \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.091735 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-svc\") pod \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\" (UID: \"3444bf87-4258-4f03-81fb-2a6d8af3ccc6\") " Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.105675 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-759c7d779-ckntp" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.109481 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-config" (OuterVolumeSpecName: "config") pod "3444bf87-4258-4f03-81fb-2a6d8af3ccc6" (UID: "3444bf87-4258-4f03-81fb-2a6d8af3ccc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.119798 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-kube-api-access-57f9v" (OuterVolumeSpecName: "kube-api-access-57f9v") pod "3444bf87-4258-4f03-81fb-2a6d8af3ccc6" (UID: "3444bf87-4258-4f03-81fb-2a6d8af3ccc6"). InnerVolumeSpecName "kube-api-access-57f9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.127118 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3444bf87-4258-4f03-81fb-2a6d8af3ccc6" (UID: "3444bf87-4258-4f03-81fb-2a6d8af3ccc6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.134952 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3444bf87-4258-4f03-81fb-2a6d8af3ccc6" (UID: "3444bf87-4258-4f03-81fb-2a6d8af3ccc6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.135410 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3444bf87-4258-4f03-81fb-2a6d8af3ccc6" (UID: "3444bf87-4258-4f03-81fb-2a6d8af3ccc6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.151000 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3444bf87-4258-4f03-81fb-2a6d8af3ccc6" (UID: "3444bf87-4258-4f03-81fb-2a6d8af3ccc6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.200277 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57f9v\" (UniqueName: \"kubernetes.io/projected/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-kube-api-access-57f9v\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.200309 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.200320 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.200332 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.200340 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.200348 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3444bf87-4258-4f03-81fb-2a6d8af3ccc6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.253964 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05c54a44-a9e7-4d9f-9758-a40dd10bf72e","Type":"ContainerStarted","Data":"952ec1559286a59de52ffcb5e7a634e21f1371ed6dd4ab90f8921c9cb3f282ef"} Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.257717 4520 generic.go:334] "Generic (PLEG): container finished" podID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerID="664fed1e55e0737a08606e2132270298741a8c14731b75db8e2505debbb55860" exitCode=0 Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.257769 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" event={"ID":"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726","Type":"ContainerDied","Data":"664fed1e55e0737a08606e2132270298741a8c14731b75db8e2505debbb55860"} Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.292405 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc" event={"ID":"3444bf87-4258-4f03-81fb-2a6d8af3ccc6","Type":"ContainerDied","Data":"f432e6439fd03c315718e676ac6c847b1c0b159062ba151dc3b5f915df7e8492"} Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.292491 4520 scope.go:117] "RemoveContainer" containerID="4f93638abd6883e01921a384ecefe16526a1860ca1738136ab62ff950ca17b23" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.292729 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-866bd8c4c5-qzvrc" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.337133 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6dea2a52-9e2f-4a08-a6ca-f168ed7379db","Type":"ContainerStarted","Data":"6165fa5fa12c28ce35801b2411844e83cdf4b78703a2bb2598e32af3803796d2"} Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.352803 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qgzqb" event={"ID":"c46098fe-52c7-4a41-9a00-d156d5bfc4be","Type":"ContainerStarted","Data":"5be57067c7407f6aa6d3be338b06ad9bc6ef28560cd5e542dacb862e6d6dba31"} Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.397490 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qgzqb" podStartSLOduration=4.39746874 podStartE2EDuration="4.39746874s" podCreationTimestamp="2026-01-30 06:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:59:58.380854514 +0000 UTC m=+912.009206695" watchObservedRunningTime="2026-01-30 06:59:58.39746874 +0000 UTC m=+912.025820921" Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.618557 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866bd8c4c5-qzvrc"] Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.625268 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-866bd8c4c5-qzvrc"] Jan 30 06:59:58 crc kubenswrapper[4520]: I0130 06:59:58.885689 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3444bf87-4258-4f03-81fb-2a6d8af3ccc6" path="/var/lib/kubelet/pods/3444bf87-4258-4f03-81fb-2a6d8af3ccc6/volumes" Jan 30 06:59:59 crc kubenswrapper[4520]: I0130 06:59:59.256236 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-759c7d779-ckntp"] Jan 30 06:59:59 crc kubenswrapper[4520]: I0130 06:59:59.417673 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-759c7d779-ckntp" event={"ID":"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f","Type":"ContainerStarted","Data":"d5639323e21232fff9fa77206d3c2b6c26228e5f46d3830915f0baf8700b6b6d"} Jan 30 06:59:59 crc kubenswrapper[4520]: I0130 06:59:59.433216 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05c54a44-a9e7-4d9f-9758-a40dd10bf72e","Type":"ContainerStarted","Data":"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d"} Jan 30 06:59:59 crc kubenswrapper[4520]: I0130 06:59:59.442676 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" event={"ID":"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726","Type":"ContainerStarted","Data":"a90541b3e8d03f9618d7f923b5e099ccb396941d7cdd4949571451bdb9a20917"} Jan 30 06:59:59 crc kubenswrapper[4520]: I0130 06:59:59.442850 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 06:59:59 crc kubenswrapper[4520]: I0130 06:59:59.473617 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" podStartSLOduration=5.473603125 podStartE2EDuration="5.473603125s" podCreationTimestamp="2026-01-30 06:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
06:59:59.465821696 +0000 UTC m=+913.094173877" watchObservedRunningTime="2026-01-30 06:59:59.473603125 +0000 UTC m=+913.101955307"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.143629 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"]
Jan 30 07:00:00 crc kubenswrapper[4520]: E0130 07:00:00.144367 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3444bf87-4258-4f03-81fb-2a6d8af3ccc6" containerName="init"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.144386 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="3444bf87-4258-4f03-81fb-2a6d8af3ccc6" containerName="init"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.144574 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="3444bf87-4258-4f03-81fb-2a6d8af3ccc6" containerName="init"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.145174 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.150641 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.153889 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.183504 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"]
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.246456 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-secret-volume\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.246510 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5src\" (UniqueName: \"kubernetes.io/projected/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-kube-api-access-z5src\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.246603 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-config-volume\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.359152 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-secret-volume\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.359203 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5src\" (UniqueName: \"kubernetes.io/projected/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-kube-api-access-z5src\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.359308 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-config-volume\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.360395 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-config-volume\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.365236 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-secret-volume\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.383757 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5src\" (UniqueName: \"kubernetes.io/projected/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-kube-api-access-z5src\") pod \"collect-profiles-29495940-s889s\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.466686 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6dea2a52-9e2f-4a08-a6ca-f168ed7379db","Type":"ContainerStarted","Data":"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc"}
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.473365 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05c54a44-a9e7-4d9f-9758-a40dd10bf72e","Type":"ContainerStarted","Data":"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148"}
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.473539 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerName="glance-log" containerID="cri-o://04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d" gracePeriod=30
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.473574 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerName="glance-httpd" containerID="cri-o://9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148" gracePeriod=30
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.485851 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"
Jan 30 07:00:00 crc kubenswrapper[4520]: I0130 07:00:00.499220 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.499199746 podStartE2EDuration="5.499199746s" podCreationTimestamp="2026-01-30 06:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:00.491937512 +0000 UTC m=+914.120289694" watchObservedRunningTime="2026-01-30 07:00:00.499199746 +0000 UTC m=+914.127551928"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.373109 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"]
Jan 30 07:00:01 crc kubenswrapper[4520]: W0130 07:00:01.397827 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod639c6c1f_c3ef_44ad_bfba_7aa257d311bf.slice/crio-bc9f94ee170f78236174d229090ee07b4baabe6f1f97c6e9e30aa25c2757b6be WatchSource:0}: Error finding container bc9f94ee170f78236174d229090ee07b4baabe6f1f97c6e9e30aa25c2757b6be: Status 404 returned error can't find the container with id bc9f94ee170f78236174d229090ee07b4baabe6f1f97c6e9e30aa25c2757b6be
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.399349 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.487162 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-httpd-run\") pod \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.487207 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-internal-tls-certs\") pod \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.487259 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-combined-ca-bundle\") pod \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.487291 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-config-data\") pod \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.487342 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-logs\") pod \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.487502 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.487559 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k728f\" (UniqueName: \"kubernetes.io/projected/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-kube-api-access-k728f\") pod \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.487634 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-scripts\") pod \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\" (UID: \"05c54a44-a9e7-4d9f-9758-a40dd10bf72e\") "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.489771 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05c54a44-a9e7-4d9f-9758-a40dd10bf72e" (UID: "05c54a44-a9e7-4d9f-9758-a40dd10bf72e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.489937 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-logs" (OuterVolumeSpecName: "logs") pod "05c54a44-a9e7-4d9f-9758-a40dd10bf72e" (UID: "05c54a44-a9e7-4d9f-9758-a40dd10bf72e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.508487 4520 generic.go:334] "Generic (PLEG): container finished" podID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerID="9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148" exitCode=143
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.510885 4520 generic.go:334] "Generic (PLEG): container finished" podID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerID="04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d" exitCode=143
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.510959 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.510694 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-scripts" (OuterVolumeSpecName: "scripts") pod "05c54a44-a9e7-4d9f-9758-a40dd10bf72e" (UID: "05c54a44-a9e7-4d9f-9758-a40dd10bf72e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.510990 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05c54a44-a9e7-4d9f-9758-a40dd10bf72e","Type":"ContainerDied","Data":"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148"}
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.511255 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05c54a44-a9e7-4d9f-9758-a40dd10bf72e","Type":"ContainerDied","Data":"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d"}
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.511313 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05c54a44-a9e7-4d9f-9758-a40dd10bf72e","Type":"ContainerDied","Data":"952ec1559286a59de52ffcb5e7a634e21f1371ed6dd4ab90f8921c9cb3f282ef"}
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.511369 4520 scope.go:117] "RemoveContainer" containerID="9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.523445 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s" event={"ID":"639c6c1f-c3ef-44ad-bfba-7aa257d311bf","Type":"ContainerStarted","Data":"bc9f94ee170f78236174d229090ee07b4baabe6f1f97c6e9e30aa25c2757b6be"}
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.528746 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "05c54a44-a9e7-4d9f-9758-a40dd10bf72e" (UID: "05c54a44-a9e7-4d9f-9758-a40dd10bf72e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.564025 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-kube-api-access-k728f" (OuterVolumeSpecName: "kube-api-access-k728f") pod "05c54a44-a9e7-4d9f-9758-a40dd10bf72e" (UID: "05c54a44-a9e7-4d9f-9758-a40dd10bf72e"). InnerVolumeSpecName "kube-api-access-k728f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.590705 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-logs\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.590745 4520 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.590757 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k728f\" (UniqueName: \"kubernetes.io/projected/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-kube-api-access-k728f\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.590766 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.590774 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.593211 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05c54a44-a9e7-4d9f-9758-a40dd10bf72e" (UID: "05c54a44-a9e7-4d9f-9758-a40dd10bf72e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.625754 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-config-data" (OuterVolumeSpecName: "config-data") pod "05c54a44-a9e7-4d9f-9758-a40dd10bf72e" (UID: "05c54a44-a9e7-4d9f-9758-a40dd10bf72e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.633584 4520 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.651085 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05c54a44-a9e7-4d9f-9758-a40dd10bf72e" (UID: "05c54a44-a9e7-4d9f-9758-a40dd10bf72e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.732087 4520 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.732381 4520 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.732395 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.732405 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c54a44-a9e7-4d9f-9758-a40dd10bf72e-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.746078 4520 scope.go:117] "RemoveContainer" containerID="04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.807503 4520 scope.go:117] "RemoveContainer" containerID="9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148"
Jan 30 07:00:01 crc kubenswrapper[4520]: E0130 07:00:01.809978 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148\": container with ID starting with 9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148 not found: ID does not exist" containerID="9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.810027 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148"} err="failed to get container status \"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148\": rpc error: code = NotFound desc = could not find container \"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148\": container with ID starting with 9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148 not found: ID does not exist"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.810055 4520 scope.go:117] "RemoveContainer" containerID="04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d"
Jan 30 07:00:01 crc kubenswrapper[4520]: E0130 07:00:01.810637 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d\": container with ID starting with 04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d not found: ID does not exist" containerID="04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.810699 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d"} err="failed to get container status \"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d\": rpc error: code = NotFound desc = could not find container \"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d\": container with ID starting with 04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d not found: ID does not exist"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.810733 4520 scope.go:117] "RemoveContainer" containerID="9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.813292 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148"} err="failed to get container status \"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148\": rpc error: code = NotFound desc = could not find container \"9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148\": container with ID starting with 9090ee2975472524121de592fdbdd606ce86eb683b486e8d28a84fcde5e09148 not found: ID does not exist"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.813339 4520 scope.go:117] "RemoveContainer" containerID="04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.813918 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d"} err="failed to get container status \"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d\": rpc error: code = NotFound desc = could not find container \"04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d\": container with ID starting with 04958fa502407b7065a4b6e5666ab50997cb61494ffc1982adb6694391d2032d not found: ID does not exist"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.886720 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.908282 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.950907 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 07:00:01 crc kubenswrapper[4520]: E0130 07:00:01.951331 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerName="glance-httpd"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.951349 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerName="glance-httpd"
Jan 30 07:00:01 crc kubenswrapper[4520]: E0130 07:00:01.951380 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerName="glance-log"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.951386 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerName="glance-log"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.951575 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerName="glance-log"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.951589 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" containerName="glance-httpd"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.954969 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.961041 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.961267 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 07:00:01 crc kubenswrapper[4520]: I0130 07:00:01.964253 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.039869 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-logs\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.039931 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jn7h\" (UniqueName: \"kubernetes.io/projected/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-kube-api-access-5jn7h\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.039960 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.039979 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.040024 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.040040 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.040103 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.040135 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.142496 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.142554 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.142595 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.142622 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.142741 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jn7h\" (UniqueName: \"kubernetes.io/projected/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-kube-api-access-5jn7h\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.142759 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-logs\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.142778 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.142794 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.145082 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-logs\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.145683 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.149622 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.156677 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.166665 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.169784 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.170174 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.174041 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jn7h\" (UniqueName: \"kubernetes.io/projected/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-kube-api-access-5jn7h\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.181720 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.320371 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.567431 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6dea2a52-9e2f-4a08-a6ca-f168ed7379db","Type":"ContainerStarted","Data":"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89"}
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.567767 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerName="glance-log" containerID="cri-o://bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc" gracePeriod=30
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.568093 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerName="glance-httpd" containerID="cri-o://81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89" gracePeriod=30
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.577403 4520 generic.go:334] "Generic (PLEG): container finished" podID="639c6c1f-c3ef-44ad-bfba-7aa257d311bf" containerID="a6af4381c8d7ffd1e2f1f3755b7858d30f154cafa305db2c976428f5fa638957" exitCode=0
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.577466 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s" event={"ID":"639c6c1f-c3ef-44ad-bfba-7aa257d311bf","Type":"ContainerDied","Data":"a6af4381c8d7ffd1e2f1f3755b7858d30f154cafa305db2c976428f5fa638957"}
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.597286 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.597273367 podStartE2EDuration="7.597273367s" podCreationTimestamp="2026-01-30 06:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:02.594627633 +0000 UTC m=+916.222979814" watchObservedRunningTime="2026-01-30 07:00:02.597273367 +0000 UTC m=+916.225625548"
Jan 30 07:00:02 crc kubenswrapper[4520]: I0130 07:00:02.755486 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c54a44-a9e7-4d9f-9758-a40dd10bf72e" path="/var/lib/kubelet/pods/05c54a44-a9e7-4d9f-9758-a40dd10bf72e/volumes"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.130386 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.307236 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.378362 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-httpd-run\") pod \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.378449 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-scripts\") pod \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.378488 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-public-tls-certs\") pod \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.378566 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-combined-ca-bundle\") pod \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.378713 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfhlp\" (UniqueName: \"kubernetes.io/projected/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-kube-api-access-pfhlp\") pod \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.378839 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6dea2a52-9e2f-4a08-a6ca-f168ed7379db" (UID: "6dea2a52-9e2f-4a08-a6ca-f168ed7379db"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.379018 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-config-data\") pod \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.379058 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.379209 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-logs\") pod \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\" (UID: \"6dea2a52-9e2f-4a08-a6ca-f168ed7379db\") "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.379981 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.380307 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-logs" (OuterVolumeSpecName: "logs") pod "6dea2a52-9e2f-4a08-a6ca-f168ed7379db" (UID: "6dea2a52-9e2f-4a08-a6ca-f168ed7379db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.388494 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "6dea2a52-9e2f-4a08-a6ca-f168ed7379db" (UID: "6dea2a52-9e2f-4a08-a6ca-f168ed7379db"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.394657 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-kube-api-access-pfhlp" (OuterVolumeSpecName: "kube-api-access-pfhlp") pod "6dea2a52-9e2f-4a08-a6ca-f168ed7379db" (UID: "6dea2a52-9e2f-4a08-a6ca-f168ed7379db"). InnerVolumeSpecName "kube-api-access-pfhlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.396702 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-scripts" (OuterVolumeSpecName: "scripts") pod "6dea2a52-9e2f-4a08-a6ca-f168ed7379db" (UID: "6dea2a52-9e2f-4a08-a6ca-f168ed7379db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.407727 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dea2a52-9e2f-4a08-a6ca-f168ed7379db" (UID: "6dea2a52-9e2f-4a08-a6ca-f168ed7379db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.431872 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-config-data" (OuterVolumeSpecName: "config-data") pod "6dea2a52-9e2f-4a08-a6ca-f168ed7379db" (UID: "6dea2a52-9e2f-4a08-a6ca-f168ed7379db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.481938 4520 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.482130 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-logs\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.482190 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.482264 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.482877 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfhlp\" (UniqueName: \"kubernetes.io/projected/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-kube-api-access-pfhlp\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.482893 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.483630 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6dea2a52-9e2f-4a08-a6ca-f168ed7379db" (UID: "6dea2a52-9e2f-4a08-a6ca-f168ed7379db"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.509192 4520 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.585357 4520 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.585389 4520 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dea2a52-9e2f-4a08-a6ca-f168ed7379db-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.659090 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68988b9b57-dgctl"]
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.677379 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68675c3f-bc31-4c90-9cfc-a0cfb0e05046","Type":"ContainerStarted","Data":"f182a343f954af2767ac826792e0a760b611a744b47d2c789f3a0c0b32660012"}
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.701946 4520 generic.go:334] "Generic (PLEG): container finished" podID="df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" containerID="ec8500b8477fefb4a8c65e86ad568dc4618d89be58fa29cf0eda07e8632c2b32" exitCode=0
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.702015 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g6448" event={"ID":"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c","Type":"ContainerDied","Data":"ec8500b8477fefb4a8c65e86ad568dc4618d89be58fa29cf0eda07e8632c2b32"}
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.706359 4520 generic.go:334] "Generic (PLEG): container finished" podID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerID="81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89" exitCode=0
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.706382 4520 generic.go:334] "Generic (PLEG): container finished" podID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerID="bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc" exitCode=143
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.710639 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.712173 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6dea2a52-9e2f-4a08-a6ca-f168ed7379db","Type":"ContainerDied","Data":"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89"}
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.712220 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6dea2a52-9e2f-4a08-a6ca-f168ed7379db","Type":"ContainerDied","Data":"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc"}
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.712233 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6dea2a52-9e2f-4a08-a6ca-f168ed7379db","Type":"ContainerDied","Data":"6165fa5fa12c28ce35801b2411844e83cdf4b78703a2bb2598e32af3803796d2"}
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.712249 4520 scope.go:117] "RemoveContainer" containerID="81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.715574 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c459697cb-g922m"]
Jan 30 07:00:03 crc kubenswrapper[4520]: E0130 07:00:03.715971 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerName="glance-log"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.715983 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerName="glance-log"
Jan 30 07:00:03 crc kubenswrapper[4520]: E0130 07:00:03.716001 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerName="glance-httpd"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.716006 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerName="glance-httpd"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.716203 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerName="glance-httpd"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.716226 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" containerName="glance-log"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.717074 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.730671 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.785020 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c459697cb-g922m"]
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.802543 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-secret-key\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.802682 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-combined-ca-bundle\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.802739 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-tls-certs\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.802823 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp8xh\" (UniqueName: \"kubernetes.io/projected/3380703e-5659-4040-8b43-e3ada0eaa6b6-kube-api-access-pp8xh\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.802909 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-scripts\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.802972 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3380703e-5659-4040-8b43-e3ada0eaa6b6-logs\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.803013 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-config-data\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.894207 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.916734 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.929031 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-tls-certs\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.929947 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp8xh\" (UniqueName: \"kubernetes.io/projected/3380703e-5659-4040-8b43-e3ada0eaa6b6-kube-api-access-pp8xh\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.932079 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-scripts\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.932415 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3380703e-5659-4040-8b43-e3ada0eaa6b6-logs\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.932490 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-config-data\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.932875 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-secret-key\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.935124 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-scripts\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.936926 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-combined-ca-bundle\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.939780 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3380703e-5659-4040-8b43-e3ada0eaa6b6-logs\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.942607 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-config-data\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.955505 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-secret-key\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.956075 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp8xh\" (UniqueName: \"kubernetes.io/projected/3380703e-5659-4040-8b43-e3ada0eaa6b6-kube-api-access-pp8xh\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.956153 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-tls-certs\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.964227 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-combined-ca-bundle\") pod \"horizon-c459697cb-g922m\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.966397 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 07:00:03 crc kubenswrapper[4520]: I0130 07:00:03.994238 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-759c7d779-ckntp"]
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.004574 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.006240 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.008992 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.009295 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.025451 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.045037 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d9dd85bbd-2g75n"]
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.051663 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.055992 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d9dd85bbd-2g75n"]
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145632 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145701 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145721 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-logs\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145735 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145759 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-horizon-secret-key\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145775 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-horizon-tls-certs\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145795 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcc0bac1-6294-432a-8703-fbef10b2a44f-config-data\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145810 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-combined-ca-bundle\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145825 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcc0bac1-6294-432a-8703-fbef10b2a44f-scripts\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145840 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145869 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-scripts\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145886 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n8qb\" (UniqueName: \"kubernetes.io/projected/bcc0bac1-6294-432a-8703-fbef10b2a44f-kube-api-access-6n8qb\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145909 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5b9\" (UniqueName: \"kubernetes.io/projected/787adbf3-a537-453d-a7fc-efbbdec67245-kube-api-access-sz5b9\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145923 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-config-data\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.145944 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcc0bac1-6294-432a-8703-fbef10b2a44f-logs\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.167955 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c459697cb-g922m"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248076 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248139 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248168 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-logs\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248182 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248206 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-horizon-secret-key\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248223 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-horizon-tls-certs\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248243 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcc0bac1-6294-432a-8703-fbef10b2a44f-config-data\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248259 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-combined-ca-bundle\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248272 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcc0bac1-6294-432a-8703-fbef10b2a44f-scripts\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n"
Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248287 4520
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248311 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-scripts\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248326 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n8qb\" (UniqueName: \"kubernetes.io/projected/bcc0bac1-6294-432a-8703-fbef10b2a44f-kube-api-access-6n8qb\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248351 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz5b9\" (UniqueName: \"kubernetes.io/projected/787adbf3-a537-453d-a7fc-efbbdec67245-kube-api-access-sz5b9\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248366 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-config-data\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.248384 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcc0bac1-6294-432a-8703-fbef10b2a44f-logs\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.249453 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.249901 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcc0bac1-6294-432a-8703-fbef10b2a44f-scripts\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.250024 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.252411 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bcc0bac1-6294-432a-8703-fbef10b2a44f-logs\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.252742 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-logs\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.257876 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.258145 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcc0bac1-6294-432a-8703-fbef10b2a44f-config-data\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.258595 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-scripts\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.259237 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-config-data\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.259381 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.259434 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-horizon-secret-key\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.267377 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-combined-ca-bundle\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.268053 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcc0bac1-6294-432a-8703-fbef10b2a44f-horizon-tls-certs\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " 
pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.275390 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n8qb\" (UniqueName: \"kubernetes.io/projected/bcc0bac1-6294-432a-8703-fbef10b2a44f-kube-api-access-6n8qb\") pod \"horizon-d9dd85bbd-2g75n\" (UID: \"bcc0bac1-6294-432a-8703-fbef10b2a44f\") " pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.276781 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz5b9\" (UniqueName: \"kubernetes.io/projected/787adbf3-a537-453d-a7fc-efbbdec67245-kube-api-access-sz5b9\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.298248 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.352296 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.371087 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.726553 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dea2a52-9e2f-4a08-a6ca-f168ed7379db" path="/var/lib/kubelet/pods/6dea2a52-9e2f-4a08-a6ca-f168ed7379db/volumes" Jan 30 07:00:04 crc kubenswrapper[4520]: I0130 07:00:04.740756 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68675c3f-bc31-4c90-9cfc-a0cfb0e05046","Type":"ContainerStarted","Data":"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8"} Jan 30 07:00:05 crc kubenswrapper[4520]: I0130 07:00:05.446689 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 07:00:05 crc kubenswrapper[4520]: I0130 07:00:05.510269 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bcc75fb87-pcx4j"] Jan 30 07:00:05 crc kubenswrapper[4520]: I0130 07:00:05.510565 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="dnsmasq-dns" containerID="cri-o://ce05a23d78b97a2b22eb56697a31bdacb3a51060391208afe0914e8aec8db6f5" gracePeriod=10 Jan 30 07:00:05 crc kubenswrapper[4520]: I0130 07:00:05.755156 4520 generic.go:334] "Generic (PLEG): container finished" podID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerID="ce05a23d78b97a2b22eb56697a31bdacb3a51060391208afe0914e8aec8db6f5" exitCode=0 Jan 30 07:00:05 crc kubenswrapper[4520]: I0130 07:00:05.755358 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" event={"ID":"20e16608-f957-4e8c-b9d2-63718bd0342e","Type":"ContainerDied","Data":"ce05a23d78b97a2b22eb56697a31bdacb3a51060391208afe0914e8aec8db6f5"} Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.847814 4520 scope.go:117] "RemoveContainer" 
containerID="bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc" Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.949876 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g6448" Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.953697 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s" Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993456 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-fernet-keys\") pod \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993600 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5src\" (UniqueName: \"kubernetes.io/projected/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-kube-api-access-z5src\") pod \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993625 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-scripts\") pod \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993645 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lml8n\" (UniqueName: \"kubernetes.io/projected/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-kube-api-access-lml8n\") pod \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993680 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-config-data\") pod \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993730 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-config-volume\") pod \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993747 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-secret-volume\") pod \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\" (UID: \"639c6c1f-c3ef-44ad-bfba-7aa257d311bf\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993769 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-combined-ca-bundle\") pod \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.993794 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-credential-keys\") pod \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\" (UID: \"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c\") " Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.997532 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" (UID: "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:08 crc kubenswrapper[4520]: I0130 07:00:08.999735 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-kube-api-access-lml8n" (OuterVolumeSpecName: "kube-api-access-lml8n") pod "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" (UID: "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c"). InnerVolumeSpecName "kube-api-access-lml8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.002635 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "639c6c1f-c3ef-44ad-bfba-7aa257d311bf" (UID: "639c6c1f-c3ef-44ad-bfba-7aa257d311bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.004083 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" (UID: "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.005715 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-kube-api-access-z5src" (OuterVolumeSpecName: "kube-api-access-z5src") pod "639c6c1f-c3ef-44ad-bfba-7aa257d311bf" (UID: "639c6c1f-c3ef-44ad-bfba-7aa257d311bf"). InnerVolumeSpecName "kube-api-access-z5src". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.007175 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-scripts" (OuterVolumeSpecName: "scripts") pod "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" (UID: "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.015685 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "639c6c1f-c3ef-44ad-bfba-7aa257d311bf" (UID: "639c6c1f-c3ef-44ad-bfba-7aa257d311bf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.020700 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-config-data" (OuterVolumeSpecName: "config-data") pod "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" (UID: "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.025220 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" (UID: "df6d9500-f0bf-4aff-a6d9-86fcdc982d6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095705 4520 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095732 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5src\" (UniqueName: \"kubernetes.io/projected/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-kube-api-access-z5src\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095743 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095753 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lml8n\" (UniqueName: \"kubernetes.io/projected/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-kube-api-access-lml8n\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095762 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095772 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095783 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/639c6c1f-c3ef-44ad-bfba-7aa257d311bf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095791 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.095799 4520 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.834763 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.834790 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s" event={"ID":"639c6c1f-c3ef-44ad-bfba-7aa257d311bf","Type":"ContainerDied","Data":"bc9f94ee170f78236174d229090ee07b4baabe6f1f97c6e9e30aa25c2757b6be"} Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.835358 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9f94ee170f78236174d229090ee07b4baabe6f1f97c6e9e30aa25c2757b6be" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.838542 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g6448" event={"ID":"df6d9500-f0bf-4aff-a6d9-86fcdc982d6c","Type":"ContainerDied","Data":"abf4148b55419adfa815923eb33f9cc40f8a6c067fb55cdacbe97e3bbd3abee0"} Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.838641 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf4148b55419adfa815923eb33f9cc40f8a6c067fb55cdacbe97e3bbd3abee0" Jan 30 07:00:09 crc kubenswrapper[4520]: I0130 07:00:09.838578 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g6448" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.049219 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g6448"] Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.055059 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g6448"] Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.133854 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zrdtq"] Jan 30 07:00:10 crc kubenswrapper[4520]: E0130 07:00:10.134249 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639c6c1f-c3ef-44ad-bfba-7aa257d311bf" containerName="collect-profiles" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.134262 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="639c6c1f-c3ef-44ad-bfba-7aa257d311bf" containerName="collect-profiles" Jan 30 07:00:10 crc kubenswrapper[4520]: E0130 07:00:10.134301 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" containerName="keystone-bootstrap" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.134307 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" containerName="keystone-bootstrap" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.134492 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" containerName="keystone-bootstrap" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.134710 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="639c6c1f-c3ef-44ad-bfba-7aa257d311bf" containerName="collect-profiles" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.135330 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.137961 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.138950 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.139060 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.139115 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.139303 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jddpd" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.146837 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zrdtq"] Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.327617 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-combined-ca-bundle\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.327698 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dvcd\" (UniqueName: \"kubernetes.io/projected/df706708-e03c-4d6e-ac65-229a419d653f-kube-api-access-4dvcd\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.327741 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-fernet-keys\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.327765 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-credential-keys\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.327970 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-config-data\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.328017 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-scripts\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.430729 4520 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-combined-ca-bundle\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.431106 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dvcd\" (UniqueName: \"kubernetes.io/projected/df706708-e03c-4d6e-ac65-229a419d653f-kube-api-access-4dvcd\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.431164 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-fernet-keys\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.431195 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-credential-keys\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.431308 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-config-data\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.431352 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-scripts\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.438635 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-scripts\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.441150 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-credential-keys\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.441457 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-fernet-keys\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.441778 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-config-data\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " 
pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.449803 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-combined-ca-bundle\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.452014 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dvcd\" (UniqueName: \"kubernetes.io/projected/df706708-e03c-4d6e-ac65-229a419d653f-kube-api-access-4dvcd\") pod \"keystone-bootstrap-zrdtq\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.456214 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.705554 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6d9500-f0bf-4aff-a6d9-86fcdc982d6c" path="/var/lib/kubelet/pods/df6d9500-f0bf-4aff-a6d9-86fcdc982d6c/volumes" Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.851726 4520 generic.go:334] "Generic (PLEG): container finished" podID="c46098fe-52c7-4a41-9a00-d156d5bfc4be" containerID="5be57067c7407f6aa6d3be338b06ad9bc6ef28560cd5e542dacb862e6d6dba31" exitCode=0 Jan 30 07:00:10 crc kubenswrapper[4520]: I0130 07:00:10.851769 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qgzqb" event={"ID":"c46098fe-52c7-4a41-9a00-d156d5bfc4be","Type":"ContainerDied","Data":"5be57067c7407f6aa6d3be338b06ad9bc6ef28560cd5e542dacb862e6d6dba31"} Jan 30 07:00:14 crc kubenswrapper[4520]: I0130 07:00:14.369881 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Jan 30 07:00:16 crc kubenswrapper[4520]: E0130 07:00:16.931083 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:16 crc kubenswrapper[4520]: E0130 07:00:16.932634 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:16 crc kubenswrapper[4520]: E0130 07:00:16.932894 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56dhf4hd7h56dh5dfh5bch57h686hc5h68fh558h68ch5c6h5f6h58bh96hdh66fhd6h58ch56dh65bh687h78h9fh57bh8fh68chd8h56bh5c9h97q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtdvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7ccf6f8c8c-g5kgh_openstack(a9ba792b-9c9d-4e3a-ae77-22c24f473037): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 07:00:16 crc kubenswrapper[4520]: E0130 07:00:16.935220 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4\\\"\"]" pod="openstack/horizon-7ccf6f8c8c-g5kgh" podUID="a9ba792b-9c9d-4e3a-ae77-22c24f473037" Jan 30 07:00:19 crc kubenswrapper[4520]: I0130 07:00:19.371855 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Jan 30 07:00:21 crc kubenswrapper[4520]: E0130 07:00:21.037481 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:21 crc kubenswrapper[4520]: E0130 07:00:21.037852 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:21 crc kubenswrapper[4520]: E0130 07:00:21.038062 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56h5cdh596h97h565h5c5h7h685h58h594h5fdh5bhc6hbh78h57dh5d9h568h6bh684h58ch5dfh5fh64dhfh66fh78h65h665h554h597h68bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrfc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-68988b9b57-dgctl_openstack(018e4a09-2b6a-4f65-999c-01584f5d9972): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 07:00:21 crc kubenswrapper[4520]: E0130 07:00:21.040728 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4\\\"\"]" pod="openstack/horizon-68988b9b57-dgctl" podUID="018e4a09-2b6a-4f65-999c-01584f5d9972" Jan 30 07:00:24 crc kubenswrapper[4520]: I0130 07:00:24.372793 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Jan 30 07:00:24 crc kubenswrapper[4520]: I0130 07:00:24.373776 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 07:00:25 crc kubenswrapper[4520]: E0130 07:00:25.868477 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:25 crc kubenswrapper[4520]: E0130 07:00:25.869152 4520 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:25 crc kubenswrapper[4520]: E0130 07:00:25.869351 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65h56h59dh55ch5f9h686h66bhbfh554hddh59ch54dh66dh94h56ch9dh584hfh5b8h5dbh58chbh55dh665h67fh5cbh5d7h679h568h655h579hccq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6mmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-759c7d779-ckntp_openstack(74b5dc84-a3f3-4bd1-8f9d-7165de599a6f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 07:00:25 crc kubenswrapper[4520]: E0130 07:00:25.882130 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:b85d0548925081ae8c6bdd697658cec4\\\"\"]" pod="openstack/horizon-759c7d779-ckntp" podUID="74b5dc84-a3f3-4bd1-8f9d-7165de599a6f" Jan 30 07:00:25 crc kubenswrapper[4520]: I0130 07:00:25.999164 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.000711 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qgzqb" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.013991 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-swift-storage-0\") pod \"20e16608-f957-4e8c-b9d2-63718bd0342e\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.014089 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcbj4\" (UniqueName: \"kubernetes.io/projected/20e16608-f957-4e8c-b9d2-63718bd0342e-kube-api-access-wcbj4\") pod \"20e16608-f957-4e8c-b9d2-63718bd0342e\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.014125 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-combined-ca-bundle\") pod \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.014184 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-config\") pod \"20e16608-f957-4e8c-b9d2-63718bd0342e\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.014307 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-nb\") pod \"20e16608-f957-4e8c-b9d2-63718bd0342e\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.014356 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-config\") pod \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.014457 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-svc\") pod \"20e16608-f957-4e8c-b9d2-63718bd0342e\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.014481 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8bmg\" (UniqueName: \"kubernetes.io/projected/c46098fe-52c7-4a41-9a00-d156d5bfc4be-kube-api-access-c8bmg\") pod \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\" (UID: \"c46098fe-52c7-4a41-9a00-d156d5bfc4be\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.014555 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-sb\") pod \"20e16608-f957-4e8c-b9d2-63718bd0342e\" (UID: \"20e16608-f957-4e8c-b9d2-63718bd0342e\") " Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.033817 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46098fe-52c7-4a41-9a00-d156d5bfc4be-kube-api-access-c8bmg" (OuterVolumeSpecName: "kube-api-access-c8bmg") pod 
"c46098fe-52c7-4a41-9a00-d156d5bfc4be" (UID: "c46098fe-52c7-4a41-9a00-d156d5bfc4be"). InnerVolumeSpecName "kube-api-access-c8bmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.039845 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e16608-f957-4e8c-b9d2-63718bd0342e-kube-api-access-wcbj4" (OuterVolumeSpecName: "kube-api-access-wcbj4") pod "20e16608-f957-4e8c-b9d2-63718bd0342e" (UID: "20e16608-f957-4e8c-b9d2-63718bd0342e"). InnerVolumeSpecName "kube-api-access-wcbj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.059883 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.060250 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" event={"ID":"20e16608-f957-4e8c-b9d2-63718bd0342e","Type":"ContainerDied","Data":"4aeb6a3ae6877dba6f66cd77b55982510470463686dc355b36c6361e74e63019"} Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.062119 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qgzqb" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.062161 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qgzqb" event={"ID":"c46098fe-52c7-4a41-9a00-d156d5bfc4be","Type":"ContainerDied","Data":"7fe24fa56e9786da0c377b6f0d30798538c1f4d245eb45985ca2695195a3b537"} Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.062206 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe24fa56e9786da0c377b6f0d30798538c1f4d245eb45985ca2695195a3b537" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.083331 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-config" (OuterVolumeSpecName: "config") pod "c46098fe-52c7-4a41-9a00-d156d5bfc4be" (UID: "c46098fe-52c7-4a41-9a00-d156d5bfc4be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.105804 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "20e16608-f957-4e8c-b9d2-63718bd0342e" (UID: "20e16608-f957-4e8c-b9d2-63718bd0342e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.116891 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.116920 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8bmg\" (UniqueName: \"kubernetes.io/projected/c46098fe-52c7-4a41-9a00-d156d5bfc4be-kube-api-access-c8bmg\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.116932 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcbj4\" (UniqueName: \"kubernetes.io/projected/20e16608-f957-4e8c-b9d2-63718bd0342e-kube-api-access-wcbj4\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.116940 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.133807 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c46098fe-52c7-4a41-9a00-d156d5bfc4be" (UID: "c46098fe-52c7-4a41-9a00-d156d5bfc4be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.149110 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-config" (OuterVolumeSpecName: "config") pod "20e16608-f957-4e8c-b9d2-63718bd0342e" (UID: "20e16608-f957-4e8c-b9d2-63718bd0342e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.153389 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "20e16608-f957-4e8c-b9d2-63718bd0342e" (UID: "20e16608-f957-4e8c-b9d2-63718bd0342e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.157795 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "20e16608-f957-4e8c-b9d2-63718bd0342e" (UID: "20e16608-f957-4e8c-b9d2-63718bd0342e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.164056 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "20e16608-f957-4e8c-b9d2-63718bd0342e" (UID: "20e16608-f957-4e8c-b9d2-63718bd0342e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.219870 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.220215 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46098fe-52c7-4a41-9a00-d156d5bfc4be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.220265 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.220278 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.220290 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20e16608-f957-4e8c-b9d2-63718bd0342e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.407099 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bcc75fb87-pcx4j"] Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.414020 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bcc75fb87-pcx4j"] Jan 30 07:00:26 crc kubenswrapper[4520]: I0130 07:00:26.698415 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" path="/var/lib/kubelet/pods/20e16608-f957-4e8c-b9d2-63718bd0342e/volumes" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.319267 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549d55ddbc-cfmfx"] Jan 30 07:00:27 crc kubenswrapper[4520]: E0130 07:00:27.320890 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="dnsmasq-dns" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.320912 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="dnsmasq-dns" Jan 30 07:00:27 crc kubenswrapper[4520]: E0130 07:00:27.320931 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46098fe-52c7-4a41-9a00-d156d5bfc4be" containerName="neutron-db-sync" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.320937 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46098fe-52c7-4a41-9a00-d156d5bfc4be" containerName="neutron-db-sync" Jan 30 07:00:27 crc kubenswrapper[4520]: E0130 07:00:27.320969 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="init" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.320975 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="init" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.321263 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="dnsmasq-dns" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.321283 4520 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c46098fe-52c7-4a41-9a00-d156d5bfc4be" containerName="neutron-db-sync" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.324377 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.342092 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549d55ddbc-cfmfx"] Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.350110 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bb59b888-snb5k"] Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.351459 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.354482 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.354691 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.356190 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bb59b888-snb5k"] Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.356768 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cjk6l" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.356806 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.501673 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-sb\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.501880 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvxr\" (UniqueName: \"kubernetes.io/projected/2336abfe-2191-4b5f-92bd-2077f6051a52-kube-api-access-plvxr\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.501947 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-swift-storage-0\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.501990 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5csg7\" (UniqueName: \"kubernetes.io/projected/2cd8643f-309c-46b4-bc83-4a8548e98403-kube-api-access-5csg7\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.502036 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-config\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.502136 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-httpd-config\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.502156 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-svc\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.502310 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-config\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.502383 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-nb\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.502427 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-ovndb-tls-certs\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.502456 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-combined-ca-bundle\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609612 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-sb\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609694 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvxr\" (UniqueName: \"kubernetes.io/projected/2336abfe-2191-4b5f-92bd-2077f6051a52-kube-api-access-plvxr\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609727 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-swift-storage-0\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609750 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5csg7\" (UniqueName: \"kubernetes.io/projected/2cd8643f-309c-46b4-bc83-4a8548e98403-kube-api-access-5csg7\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609771 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-config\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609813 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-httpd-config\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609827 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-svc\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609895 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-config\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609938 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-nb\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609959 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-ovndb-tls-certs\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.609974 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-combined-ca-bundle\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.611403 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-config\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: 
\"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.611911 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-sb\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.612616 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-swift-storage-0\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.613610 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-nb\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.613763 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-svc\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.624170 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-ovndb-tls-certs\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.626114 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-combined-ca-bundle\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.626203 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-httpd-config\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.631483 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5csg7\" (UniqueName: \"kubernetes.io/projected/2cd8643f-309c-46b4-bc83-4a8548e98403-kube-api-access-5csg7\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.640389 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-config\") pod \"neutron-7bb59b888-snb5k\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.641830 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-plvxr\" (UniqueName: \"kubernetes.io/projected/2336abfe-2191-4b5f-92bd-2077f6051a52-kube-api-access-plvxr\") pod \"dnsmasq-dns-549d55ddbc-cfmfx\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.649982 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:27 crc kubenswrapper[4520]: I0130 07:00:27.680762 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.374406 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bcc75fb87-pcx4j" podUID="20e16608-f957-4e8c-b9d2-63718bd0342e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.440892 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7445dc46fc-s424z"] Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.443598 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.445845 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.446167 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.458818 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-ovndb-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.458947 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-httpd-config\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.459053 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-public-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.459280 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlthh\" (UniqueName: \"kubernetes.io/projected/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-kube-api-access-tlthh\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.459459 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-config\") pod \"neutron-7445dc46fc-s424z\" (UID: 
\"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.459911 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-internal-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.459995 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-combined-ca-bundle\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.468985 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7445dc46fc-s424z"] Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.563255 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-public-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.563347 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlthh\" (UniqueName: \"kubernetes.io/projected/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-kube-api-access-tlthh\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.563409 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-config\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.563492 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-internal-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.563544 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-combined-ca-bundle\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.563712 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-ovndb-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.563764 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-httpd-config\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.574764 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-internal-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.575616 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-public-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.587572 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-combined-ca-bundle\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.587677 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-ovndb-tls-certs\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.589810 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-config\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.591226 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-httpd-config\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.600076 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlthh\" (UniqueName: \"kubernetes.io/projected/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-kube-api-access-tlthh\") pod \"neutron-7445dc46fc-s424z\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:29 crc kubenswrapper[4520]: I0130 07:00:29.777401 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.700614 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.884559 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-scripts\") pod \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.884629 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9ba792b-9c9d-4e3a-ae77-22c24f473037-horizon-secret-key\") pod \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.884864 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtdvq\" (UniqueName: \"kubernetes.io/projected/a9ba792b-9c9d-4e3a-ae77-22c24f473037-kube-api-access-dtdvq\") pod \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.884902 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-config-data\") pod \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.884972 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9ba792b-9c9d-4e3a-ae77-22c24f473037-logs\") pod \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\" (UID: \"a9ba792b-9c9d-4e3a-ae77-22c24f473037\") " Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.885765 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9ba792b-9c9d-4e3a-ae77-22c24f473037-logs" (OuterVolumeSpecName: "logs") pod "a9ba792b-9c9d-4e3a-ae77-22c24f473037" (UID: "a9ba792b-9c9d-4e3a-ae77-22c24f473037"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.885787 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-scripts" (OuterVolumeSpecName: "scripts") pod "a9ba792b-9c9d-4e3a-ae77-22c24f473037" (UID: "a9ba792b-9c9d-4e3a-ae77-22c24f473037"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.886033 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-config-data" (OuterVolumeSpecName: "config-data") pod "a9ba792b-9c9d-4e3a-ae77-22c24f473037" (UID: "a9ba792b-9c9d-4e3a-ae77-22c24f473037"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.892148 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9ba792b-9c9d-4e3a-ae77-22c24f473037-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a9ba792b-9c9d-4e3a-ae77-22c24f473037" (UID: "a9ba792b-9c9d-4e3a-ae77-22c24f473037"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:34 crc kubenswrapper[4520]: E0130 07:00:34.892402 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:34 crc kubenswrapper[4520]: E0130 07:00:34.892507 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:34 crc kubenswrapper[4520]: E0130 07:00:34.892991 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7zpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qndsg_openstack(1771d5c5-4904-435a-81ac-80eaaf23bc68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.893063 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ba792b-9c9d-4e3a-ae77-22c24f473037-kube-api-access-dtdvq" (OuterVolumeSpecName: "kube-api-access-dtdvq") pod "a9ba792b-9c9d-4e3a-ae77-22c24f473037" (UID: "a9ba792b-9c9d-4e3a-ae77-22c24f473037"). InnerVolumeSpecName "kube-api-access-dtdvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:34 crc kubenswrapper[4520]: E0130 07:00:34.895172 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-qndsg" podUID="1771d5c5-4904-435a-81ac-80eaaf23bc68" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.968746 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68988b9b57-dgctl" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.994401 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtdvq\" (UniqueName: \"kubernetes.io/projected/a9ba792b-9c9d-4e3a-ae77-22c24f473037-kube-api-access-dtdvq\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.994464 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.994477 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9ba792b-9c9d-4e3a-ae77-22c24f473037-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.994496 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ba792b-9c9d-4e3a-ae77-22c24f473037-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:34 crc kubenswrapper[4520]: I0130 07:00:34.994507 4520 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a9ba792b-9c9d-4e3a-ae77-22c24f473037-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.095365 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018e4a09-2b6a-4f65-999c-01584f5d9972-horizon-secret-key\") pod \"018e4a09-2b6a-4f65-999c-01584f5d9972\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.095460 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrfc9\" (UniqueName: \"kubernetes.io/projected/018e4a09-2b6a-4f65-999c-01584f5d9972-kube-api-access-hrfc9\") pod \"018e4a09-2b6a-4f65-999c-01584f5d9972\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.095638 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018e4a09-2b6a-4f65-999c-01584f5d9972-logs\") pod \"018e4a09-2b6a-4f65-999c-01584f5d9972\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.095725 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-scripts\") pod \"018e4a09-2b6a-4f65-999c-01584f5d9972\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.095778 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-config-data\") pod \"018e4a09-2b6a-4f65-999c-01584f5d9972\" (UID: \"018e4a09-2b6a-4f65-999c-01584f5d9972\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.096329 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/018e4a09-2b6a-4f65-999c-01584f5d9972-logs" (OuterVolumeSpecName: "logs") pod "018e4a09-2b6a-4f65-999c-01584f5d9972" (UID: "018e4a09-2b6a-4f65-999c-01584f5d9972"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.096497 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-scripts" (OuterVolumeSpecName: "scripts") pod "018e4a09-2b6a-4f65-999c-01584f5d9972" (UID: "018e4a09-2b6a-4f65-999c-01584f5d9972"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.096632 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-config-data" (OuterVolumeSpecName: "config-data") pod "018e4a09-2b6a-4f65-999c-01584f5d9972" (UID: "018e4a09-2b6a-4f65-999c-01584f5d9972"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.097113 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018e4a09-2b6a-4f65-999c-01584f5d9972-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.097139 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.097152 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/018e4a09-2b6a-4f65-999c-01584f5d9972-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.102220 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018e4a09-2b6a-4f65-999c-01584f5d9972-kube-api-access-hrfc9" (OuterVolumeSpecName: "kube-api-access-hrfc9") pod "018e4a09-2b6a-4f65-999c-01584f5d9972" (UID: "018e4a09-2b6a-4f65-999c-01584f5d9972"). InnerVolumeSpecName "kube-api-access-hrfc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.102286 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018e4a09-2b6a-4f65-999c-01584f5d9972-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "018e4a09-2b6a-4f65-999c-01584f5d9972" (UID: "018e4a09-2b6a-4f65-999c-01584f5d9972"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.178362 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68988b9b57-dgctl" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.178389 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68988b9b57-dgctl" event={"ID":"018e4a09-2b6a-4f65-999c-01584f5d9972","Type":"ContainerDied","Data":"daae3841408735311ff0308cbbc10aa51e642639ba9d81256030f983c502819a"} Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.185538 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ccf6f8c8c-g5kgh" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.185568 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ccf6f8c8c-g5kgh" event={"ID":"a9ba792b-9c9d-4e3a-ae77-22c24f473037","Type":"ContainerDied","Data":"0efd3b36c42c7b8c01657e9c6f218aa697500f5511ed2a4ec1acbb6522467e0c"} Jan 30 07:00:35 crc kubenswrapper[4520]: E0130 07:00:35.189843 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:b85d0548925081ae8c6bdd697658cec4\\\"\"" pod="openstack/heat-db-sync-qndsg" podUID="1771d5c5-4904-435a-81ac-80eaaf23bc68" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.199156 4520 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/018e4a09-2b6a-4f65-999c-01584f5d9972-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.199193 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrfc9\" (UniqueName: \"kubernetes.io/projected/018e4a09-2b6a-4f65-999c-01584f5d9972-kube-api-access-hrfc9\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.255631 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68988b9b57-dgctl"] Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.275560 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68988b9b57-dgctl"] Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.288292 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ccf6f8c8c-g5kgh"] Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.294483 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7ccf6f8c8c-g5kgh"] Jan 30 07:00:35 crc kubenswrapper[4520]: E0130 07:00:35.433081 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:35 crc kubenswrapper[4520]: E0130 07:00:35.433133 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:35 crc kubenswrapper[4520]: E0130 07:00:35.433250 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4z8r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-ld8j2_openstack(77b507ad-cda3-49b8-9a29-4c10ce6c1ac4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 07:00:35 crc kubenswrapper[4520]: E0130 07:00:35.434351 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-ld8j2" podUID="77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.461922 4520 scope.go:117] "RemoveContainer" containerID="81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89" Jan 30 07:00:35 crc kubenswrapper[4520]: E0130 07:00:35.462792 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89\": container with ID starting with 81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89 not found: ID does not exist" containerID="81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.462947 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89"} err="failed to get container status \"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89\": rpc error: code = NotFound desc = could not find container \"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89\": container with ID starting with 81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89 not found: ID does not exist" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.463076 4520 scope.go:117] "RemoveContainer" containerID="bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc" Jan 30 07:00:35 crc kubenswrapper[4520]: E0130 07:00:35.463942 4520 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc\": container with ID starting with bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc not found: ID does not exist" containerID="bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.463989 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc"} err="failed to get container status \"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc\": rpc error: code = NotFound desc = could not find container \"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc\": container with ID starting with bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc not found: ID does not exist" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.464020 4520 scope.go:117] "RemoveContainer" containerID="81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.464589 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89"} err="failed to get container status \"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89\": rpc error: code = NotFound desc = could not find container \"81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89\": container with ID starting with 81a249d44778c19f37d9165922002adf1b05703dbf440b6a295804d604915b89 not found: ID does not exist" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.464635 4520 scope.go:117] "RemoveContainer" containerID="bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.465213 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc"} err="failed to get container status \"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc\": rpc error: code = NotFound desc = could not find container \"bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc\": container with ID starting with bc44fa6146fb1b0b83ace31b3bd7ce0d6fa0b9d2fab148dd498f7e76c49d41bc not found: ID does not exist" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.465244 4520 scope.go:117] "RemoveContainer" containerID="ce05a23d78b97a2b22eb56697a31bdacb3a51060391208afe0914e8aec8db6f5" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.477493 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-759c7d779-ckntp" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.607580 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-scripts\") pod \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.607659 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-config-data\") pod \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.607900 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-horizon-secret-key\") pod \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.607957 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6mmr\" (UniqueName: \"kubernetes.io/projected/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-kube-api-access-n6mmr\") pod \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.608565 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-logs\") pod \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\" (UID: \"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f\") " Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.608601 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-scripts" (OuterVolumeSpecName: "scripts") pod "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f" (UID: "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.608338 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-config-data" (OuterVolumeSpecName: "config-data") pod "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f" (UID: "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.608932 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-logs" (OuterVolumeSpecName: "logs") pod "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f" (UID: "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.610378 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.610402 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.610415 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.611661 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-kube-api-access-n6mmr" (OuterVolumeSpecName: "kube-api-access-n6mmr") pod "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f" (UID: "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f"). InnerVolumeSpecName "kube-api-access-n6mmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.611770 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f" (UID: "74b5dc84-a3f3-4bd1-8f9d-7165de599a6f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.712628 4520 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.712668 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6mmr\" (UniqueName: \"kubernetes.io/projected/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f-kube-api-access-n6mmr\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.754958 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rdh5m"] Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.757008 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.782547 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rdh5m"] Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.825702 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4n86\" (UniqueName: \"kubernetes.io/projected/390f8ec2-a783-45b8-a1c8-984400c11237-kube-api-access-g4n86\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.825886 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-utilities\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.827238 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-catalog-content\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.929244 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-catalog-content\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.929420 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4n86\" (UniqueName: \"kubernetes.io/projected/390f8ec2-a783-45b8-a1c8-984400c11237-kube-api-access-g4n86\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.929454 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-utilities\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.929770 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-catalog-content\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.930062 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-utilities\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:35 crc kubenswrapper[4520]: I0130 07:00:35.957281 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g4n86\" (UniqueName: \"kubernetes.io/projected/390f8ec2-a783-45b8-a1c8-984400c11237-kube-api-access-g4n86\") pod \"community-operators-rdh5m\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:36 crc kubenswrapper[4520]: I0130 07:00:36.078432 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:36 crc kubenswrapper[4520]: I0130 07:00:36.195924 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-759c7d779-ckntp" Jan 30 07:00:36 crc kubenswrapper[4520]: I0130 07:00:36.195920 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-759c7d779-ckntp" event={"ID":"74b5dc84-a3f3-4bd1-8f9d-7165de599a6f","Type":"ContainerDied","Data":"d5639323e21232fff9fa77206d3c2b6c26228e5f46d3830915f0baf8700b6b6d"} Jan 30 07:00:36 crc kubenswrapper[4520]: E0130 07:00:36.200354 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:b85d0548925081ae8c6bdd697658cec4\\\"\"" pod="openstack/barbican-db-sync-ld8j2" podUID="77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" Jan 30 07:00:36 crc kubenswrapper[4520]: I0130 07:00:36.296866 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-759c7d779-ckntp"] Jan 30 07:00:36 crc kubenswrapper[4520]: I0130 07:00:36.313610 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-759c7d779-ckntp"] Jan 30 07:00:36 crc kubenswrapper[4520]: I0130 07:00:36.716874 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018e4a09-2b6a-4f65-999c-01584f5d9972" path="/var/lib/kubelet/pods/018e4a09-2b6a-4f65-999c-01584f5d9972/volumes" Jan 30 07:00:36 crc kubenswrapper[4520]: I0130 07:00:36.717388 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b5dc84-a3f3-4bd1-8f9d-7165de599a6f" path="/var/lib/kubelet/pods/74b5dc84-a3f3-4bd1-8f9d-7165de599a6f/volumes" Jan 30 07:00:36 crc kubenswrapper[4520]: I0130 07:00:36.717898 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ba792b-9c9d-4e3a-ae77-22c24f473037" path="/var/lib/kubelet/pods/a9ba792b-9c9d-4e3a-ae77-22c24f473037/volumes" Jan 30 07:00:37 crc kubenswrapper[4520]: E0130 07:00:37.844389 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:37 crc kubenswrapper[4520]: E0130 07:00:37.844780 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:00:37 crc kubenswrapper[4520]: E0130 07:00:37.846323 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:b85d0548925081ae8c6bdd697658cec4,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfrzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xgsxk_openstack(fc2063bc-3a1e-4e9f-badc-299e256a2f3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 07:00:37 crc kubenswrapper[4520]: E0130 07:00:37.847576 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xgsxk" podUID="fc2063bc-3a1e-4e9f-badc-299e256a2f3c" Jan 30 07:00:37 crc kubenswrapper[4520]: I0130 07:00:37.954773 4520 scope.go:117] "RemoveContainer" containerID="e88ba02bcc74ae1575e631bf9974f9e087e46d874e90376a98d1259f1ac2672d" Jan 30 07:00:38 crc kubenswrapper[4520]: E0130 07:00:38.238195 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:b85d0548925081ae8c6bdd697658cec4\\\"\"" pod="openstack/cinder-db-sync-xgsxk" podUID="fc2063bc-3a1e-4e9f-badc-299e256a2f3c" Jan 30 07:00:38 crc kubenswrapper[4520]: I0130 07:00:38.360314 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d9dd85bbd-2g75n"] Jan 30 07:00:38 crc kubenswrapper[4520]: I0130 
07:00:38.468016 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 07:00:38 crc kubenswrapper[4520]: I0130 07:00:38.527969 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549d55ddbc-cfmfx"] Jan 30 07:00:38 crc kubenswrapper[4520]: I0130 07:00:38.551026 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c459697cb-g922m"] Jan 30 07:00:38 crc kubenswrapper[4520]: I0130 07:00:38.744453 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zrdtq"] Jan 30 07:00:38 crc kubenswrapper[4520]: W0130 07:00:38.760093 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf706708_e03c_4d6e_ac65_229a419d653f.slice/crio-b8a2bcf19a893bed1ad6f2f72c7e82effd9aeaf462c80449133d328596ed3142 WatchSource:0}: Error finding container b8a2bcf19a893bed1ad6f2f72c7e82effd9aeaf462c80449133d328596ed3142: Status 404 returned error can't find the container with id b8a2bcf19a893bed1ad6f2f72c7e82effd9aeaf462c80449133d328596ed3142 Jan 30 07:00:38 crc kubenswrapper[4520]: I0130 07:00:38.771300 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rdh5m"] Jan 30 07:00:38 crc kubenswrapper[4520]: I0130 07:00:38.790482 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bb59b888-snb5k"] Jan 30 07:00:38 crc kubenswrapper[4520]: I0130 07:00:38.909101 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7445dc46fc-s424z"] Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.251681 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68675c3f-bc31-4c90-9cfc-a0cfb0e05046","Type":"ContainerStarted","Data":"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.252387 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerName="glance-log" containerID="cri-o://87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8" gracePeriod=30 Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.252844 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerName="glance-httpd" containerID="cri-o://037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5" gracePeriod=30 Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.261420 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zrdtq" event={"ID":"df706708-e03c-4d6e-ac65-229a419d653f","Type":"ContainerStarted","Data":"b8a2bcf19a893bed1ad6f2f72c7e82effd9aeaf462c80449133d328596ed3142"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.266533 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"787adbf3-a537-453d-a7fc-efbbdec67245","Type":"ContainerStarted","Data":"1abceda6299f4a5ecf186330b1fdbbcc1c2b0ab0b2f8cea7f1a417289ab3a72e"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.273323 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdh5m" 
event={"ID":"390f8ec2-a783-45b8-a1c8-984400c11237","Type":"ContainerStarted","Data":"661c16f6d42a8ddbaba506d28036a5de14eaa583d1bfe9e0d8fddfba8343be26"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.283577 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7445dc46fc-s424z" event={"ID":"07aa3f61-cfcb-4aa2-8430-e4f800dbf572","Type":"ContainerStarted","Data":"8b6d62a427a1144ed02fa2512156c0c6e62e866ada8f94b894bb531ee62d86f4"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.284635 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=38.284614113 podStartE2EDuration="38.284614113s" podCreationTimestamp="2026-01-30 07:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:39.277489537 +0000 UTC m=+952.905841718" watchObservedRunningTime="2026-01-30 07:00:39.284614113 +0000 UTC m=+952.912966294" Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.288182 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8smt" event={"ID":"0b1fa358-6b62-4cf6-a32c-89e98f169b42","Type":"ContainerStarted","Data":"2226d52d6e6304b6b79d579ee87ea0fef4db05f734eed40dc74be6d9a62eff0b"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.304557 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-t8smt" podStartSLOduration=6.709490646 podStartE2EDuration="45.304536427s" podCreationTimestamp="2026-01-30 06:59:54 +0000 UTC" firstStartedPulling="2026-01-30 06:59:56.849456131 +0000 UTC m=+910.477808312" lastFinishedPulling="2026-01-30 07:00:35.444501912 +0000 UTC m=+949.072854093" observedRunningTime="2026-01-30 07:00:39.302997745 +0000 UTC m=+952.931349925" watchObservedRunningTime="2026-01-30 07:00:39.304536427 +0000 UTC m=+952.932888609" Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.309555 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb59b888-snb5k" event={"ID":"2cd8643f-309c-46b4-bc83-4a8548e98403","Type":"ContainerStarted","Data":"ef0aebc2a60a38de685158277404e0ffbbc0857912a670dbce26f262284150b1"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.316232 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d9dd85bbd-2g75n" event={"ID":"bcc0bac1-6294-432a-8703-fbef10b2a44f","Type":"ContainerStarted","Data":"3fc935fe231c20a524900386a51d2f43f6dffa5f03e76e56de8c467e3b683f3c"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.326636 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c459697cb-g922m" event={"ID":"3380703e-5659-4040-8b43-e3ada0eaa6b6","Type":"ContainerStarted","Data":"06337955ea19d6817ae9d812ae722f4d62d5f6f41377f0a593f497c064f9b33c"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.333434 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerStarted","Data":"21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.343714 4520 generic.go:334] "Generic (PLEG): container finished" podID="2336abfe-2191-4b5f-92bd-2077f6051a52" containerID="fae679a885187e9e7526d2a5cdddf61022fc4fb70619c2be743b77b3ebecdc17" exitCode=0 Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.343754 4520 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" event={"ID":"2336abfe-2191-4b5f-92bd-2077f6051a52","Type":"ContainerDied","Data":"fae679a885187e9e7526d2a5cdddf61022fc4fb70619c2be743b77b3ebecdc17"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.343775 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" event={"ID":"2336abfe-2191-4b5f-92bd-2077f6051a52","Type":"ContainerStarted","Data":"2b94d9e5799981260df75d82cc68717c8311598496fc3290ff16cf0fd3541852"} Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.862959 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.935199 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.935580 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-config-data\") pod \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.935739 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-internal-tls-certs\") pod \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.935821 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-httpd-run\") pod \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.935869 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-logs\") pod \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.935971 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-combined-ca-bundle\") pod \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.936012 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-scripts\") pod \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.936042 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jn7h\" (UniqueName: \"kubernetes.io/projected/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-kube-api-access-5jn7h\") pod \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\" (UID: \"68675c3f-bc31-4c90-9cfc-a0cfb0e05046\") " Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.940139 4520 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "68675c3f-bc31-4c90-9cfc-a0cfb0e05046" (UID: "68675c3f-bc31-4c90-9cfc-a0cfb0e05046"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.943108 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-logs" (OuterVolumeSpecName: "logs") pod "68675c3f-bc31-4c90-9cfc-a0cfb0e05046" (UID: "68675c3f-bc31-4c90-9cfc-a0cfb0e05046"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.968184 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "68675c3f-bc31-4c90-9cfc-a0cfb0e05046" (UID: "68675c3f-bc31-4c90-9cfc-a0cfb0e05046"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.972046 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-scripts" (OuterVolumeSpecName: "scripts") pod "68675c3f-bc31-4c90-9cfc-a0cfb0e05046" (UID: "68675c3f-bc31-4c90-9cfc-a0cfb0e05046"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:39 crc kubenswrapper[4520]: I0130 07:00:39.972108 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-kube-api-access-5jn7h" (OuterVolumeSpecName: "kube-api-access-5jn7h") pod "68675c3f-bc31-4c90-9cfc-a0cfb0e05046" (UID: "68675c3f-bc31-4c90-9cfc-a0cfb0e05046"). InnerVolumeSpecName "kube-api-access-5jn7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.037851 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.037880 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jn7h\" (UniqueName: \"kubernetes.io/projected/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-kube-api-access-5jn7h\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.037907 4520 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.037919 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.037927 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.045266 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "68675c3f-bc31-4c90-9cfc-a0cfb0e05046" (UID: "68675c3f-bc31-4c90-9cfc-a0cfb0e05046"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.045890 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68675c3f-bc31-4c90-9cfc-a0cfb0e05046" (UID: "68675c3f-bc31-4c90-9cfc-a0cfb0e05046"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.077606 4520 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.089596 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-config-data" (OuterVolumeSpecName: "config-data") pod "68675c3f-bc31-4c90-9cfc-a0cfb0e05046" (UID: "68675c3f-bc31-4c90-9cfc-a0cfb0e05046"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.141843 4520 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.141874 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.141889 4520 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.141900 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68675c3f-bc31-4c90-9cfc-a0cfb0e05046-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.383242 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7445dc46fc-s424z" event={"ID":"07aa3f61-cfcb-4aa2-8430-e4f800dbf572","Type":"ContainerStarted","Data":"be31646e606daa8921125c772c609b179e4fdced55dbbd3d1d7da3abaff7801a"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.383537 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7445dc46fc-s424z" event={"ID":"07aa3f61-cfcb-4aa2-8430-e4f800dbf572","Type":"ContainerStarted","Data":"c300fc62e1373c388229a82c0d2f920a528002128dc058a25a8b291ab97f13c0"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.383655 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.388087 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d9dd85bbd-2g75n" event={"ID":"bcc0bac1-6294-432a-8703-fbef10b2a44f","Type":"ContainerStarted","Data":"a548c07c9ba71fd99fb8d3d0f4cc406fd7a3172166baa4c7de17dfd9482ceed6"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.388113 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d9dd85bbd-2g75n" event={"ID":"bcc0bac1-6294-432a-8703-fbef10b2a44f","Type":"ContainerStarted","Data":"dec0c362f39f4f9c6c8ec3ff8710ebebc258b52c5db4b75d9897a1bd7206469a"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.395603 4520 generic.go:334] "Generic (PLEG): container finished" podID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerID="037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5" exitCode=0 Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.395625 4520 generic.go:334] "Generic (PLEG): container finished" podID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerID="87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8" exitCode=143 Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.395664 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68675c3f-bc31-4c90-9cfc-a0cfb0e05046","Type":"ContainerDied","Data":"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.395680 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"68675c3f-bc31-4c90-9cfc-a0cfb0e05046","Type":"ContainerDied","Data":"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.395690 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68675c3f-bc31-4c90-9cfc-a0cfb0e05046","Type":"ContainerDied","Data":"f182a343f954af2767ac826792e0a760b611a744b47d2c789f3a0c0b32660012"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.395704 4520 scope.go:117] "RemoveContainer" containerID="037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.395819 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.401659 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zrdtq" event={"ID":"df706708-e03c-4d6e-ac65-229a419d653f","Type":"ContainerStarted","Data":"acf5d09c7e94ac9bf5c6318c5b1c6a00d87a60284ad2d32f701bf5a5c0ee6bee"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.415104 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" event={"ID":"2336abfe-2191-4b5f-92bd-2077f6051a52","Type":"ContainerStarted","Data":"e356745ca61bbc6db9c0e312560655ef1ecbaa123a0d96b37987d4a3c5aa44c3"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.415681 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.420989 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7445dc46fc-s424z" podStartSLOduration=11.420969218 podStartE2EDuration="11.420969218s" podCreationTimestamp="2026-01-30 07:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:40.415460861 +0000 UTC m=+954.043813042" watchObservedRunningTime="2026-01-30 07:00:40.420969218 +0000 UTC m=+954.049321399" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.422108 4520 generic.go:334] "Generic (PLEG): container finished" podID="390f8ec2-a783-45b8-a1c8-984400c11237" containerID="dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be" exitCode=0 Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.422169 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdh5m" event={"ID":"390f8ec2-a783-45b8-a1c8-984400c11237","Type":"ContainerDied","Data":"dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.427029 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb59b888-snb5k" event={"ID":"2cd8643f-309c-46b4-bc83-4a8548e98403","Type":"ContainerStarted","Data":"7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.427058 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb59b888-snb5k" event={"ID":"2cd8643f-309c-46b4-bc83-4a8548e98403","Type":"ContainerStarted","Data":"1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.427265 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.434494 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c459697cb-g922m" event={"ID":"3380703e-5659-4040-8b43-e3ada0eaa6b6","Type":"ContainerStarted","Data":"d03bf2e75cec449c2d1120c53868d2b6ad99cf296b31eb75a042471f6bea2caa"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.434543 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c459697cb-g922m" event={"ID":"3380703e-5659-4040-8b43-e3ada0eaa6b6","Type":"ContainerStarted","Data":"2b747fc744b96278e67ea47a8f4cfb4393466c3789a5b3eca465bed0bea2d640"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.441384 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zrdtq" podStartSLOduration=30.441345215 podStartE2EDuration="30.441345215s" podCreationTimestamp="2026-01-30 07:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:40.433074546 +0000 UTC m=+954.061426727" watchObservedRunningTime="2026-01-30 07:00:40.441345215 +0000 UTC m=+954.069697397" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.444460 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"787adbf3-a537-453d-a7fc-efbbdec67245","Type":"ContainerStarted","Data":"ef72d32e988252b7696fb6bdb1d9060db9878a67f2e9e493a010bf5f9aca2e05"} Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.450877 4520 scope.go:117] "RemoveContainer" containerID="87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.481588 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.496548 4520 scope.go:117] "RemoveContainer" containerID="037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.498673 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:00:40 crc kubenswrapper[4520]: E0130 07:00:40.500634 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5\": container with ID starting with 037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5 not found: ID does not exist" containerID="037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.500681 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5"} err="failed to get container status \"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5\": rpc error: code = NotFound desc = could not find container \"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5\": container with ID starting with 037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5 not found: ID does not exist" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.500711 4520 scope.go:117] "RemoveContainer" containerID="87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8" Jan 30 07:00:40 crc kubenswrapper[4520]: E0130 07:00:40.501850 4520 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8\": container with ID starting with 87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8 not found: ID does not exist" containerID="87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.501893 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8"} err="failed to get container status \"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8\": rpc error: code = NotFound desc = could not find container \"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8\": container with ID starting with 87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8 not found: ID does not exist" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.501915 4520 scope.go:117] "RemoveContainer" containerID="037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.503028 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5"} err="failed to get container status \"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5\": rpc error: code = NotFound desc = could not find container \"037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5\": container with ID starting with 037cc10c3c2e2f4a8bdc01ea4c4e8da395f589c15db2bcf7025eda33728b2aa5 not found: ID does not exist" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.503049 4520 scope.go:117] "RemoveContainer" containerID="87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.505192 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8"} err="failed to get container status \"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8\": rpc error: code = NotFound desc = could not find container \"87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8\": container with ID starting with 87080c8365ec46d2b7b53e548a25ffea9342cea05d1f9fef2affda0c0a73c9a8 not found: ID does not exist" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.510148 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:00:40 crc kubenswrapper[4520]: E0130 07:00:40.510615 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerName="glance-log" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.510629 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerName="glance-log" Jan 30 07:00:40 crc kubenswrapper[4520]: E0130 07:00:40.510670 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerName="glance-httpd" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.510677 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerName="glance-httpd" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.510860 4520 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerName="glance-httpd" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.510907 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" containerName="glance-log" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.511936 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.515626 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.515822 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.526248 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d9dd85bbd-2g75n" podStartSLOduration=36.769742689 podStartE2EDuration="37.526226548s" podCreationTimestamp="2026-01-30 07:00:03 +0000 UTC" firstStartedPulling="2026-01-30 07:00:38.367013842 +0000 UTC m=+951.995366024" lastFinishedPulling="2026-01-30 07:00:39.123497702 +0000 UTC m=+952.751849883" observedRunningTime="2026-01-30 07:00:40.469668926 +0000 UTC m=+954.098021107" watchObservedRunningTime="2026-01-30 07:00:40.526226548 +0000 UTC m=+954.154578729" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.543562 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.545459 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" podStartSLOduration=13.545441293 podStartE2EDuration="13.545441293s" podCreationTimestamp="2026-01-30 07:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:40.502652066 +0000 UTC m=+954.131004248" watchObservedRunningTime="2026-01-30 07:00:40.545441293 +0000 UTC m=+954.173793464" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.549751 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c459697cb-g922m" podStartSLOduration=36.824392221 podStartE2EDuration="37.549743621s" podCreationTimestamp="2026-01-30 07:00:03 +0000 UTC" firstStartedPulling="2026-01-30 07:00:38.556763925 +0000 UTC m=+952.185116106" lastFinishedPulling="2026-01-30 07:00:39.282115325 +0000 UTC m=+952.910467506" observedRunningTime="2026-01-30 07:00:40.534799817 +0000 UTC m=+954.163151997" watchObservedRunningTime="2026-01-30 07:00:40.549743621 +0000 UTC m=+954.178095793" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.584881 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bb59b888-snb5k" podStartSLOduration=13.584860393 podStartE2EDuration="13.584860393s" podCreationTimestamp="2026-01-30 07:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:40.57719943 +0000 UTC m=+954.205551612" watchObservedRunningTime="2026-01-30 07:00:40.584860393 +0000 UTC m=+954.213212575" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.652971 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.653018 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.653047 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-logs\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.653068 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.653126 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.653174 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.653197 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.653225 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdt78\" (UniqueName: \"kubernetes.io/projected/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-kube-api-access-mdt78\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.701067 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68675c3f-bc31-4c90-9cfc-a0cfb0e05046" path="/var/lib/kubelet/pods/68675c3f-bc31-4c90-9cfc-a0cfb0e05046/volumes" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.755432 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdt78\" (UniqueName: 
\"kubernetes.io/projected/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-kube-api-access-mdt78\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.755591 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.755620 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.755671 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-logs\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.755698 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.755792 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.755883 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.755909 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.756050 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.756601 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.756875 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-logs\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.762134 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.762766 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.764089 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.764178 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.770717 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdt78\" (UniqueName: \"kubernetes.io/projected/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-kube-api-access-mdt78\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.778953 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:00:40 crc kubenswrapper[4520]: I0130 07:00:40.835582 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:41 crc kubenswrapper[4520]: I0130 07:00:41.461474 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"787adbf3-a537-453d-a7fc-efbbdec67245","Type":"ContainerStarted","Data":"e8bb2877ea98fb6556ebc703ed33a000fb248bc107256b5ccb28d878fb9b762b"} Jan 30 07:00:41 crc kubenswrapper[4520]: I0130 07:00:41.465853 4520 generic.go:334] "Generic (PLEG): container finished" podID="0b1fa358-6b62-4cf6-a32c-89e98f169b42" containerID="2226d52d6e6304b6b79d579ee87ea0fef4db05f734eed40dc74be6d9a62eff0b" exitCode=0 Jan 30 07:00:41 crc kubenswrapper[4520]: I0130 07:00:41.465935 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8smt" event={"ID":"0b1fa358-6b62-4cf6-a32c-89e98f169b42","Type":"ContainerDied","Data":"2226d52d6e6304b6b79d579ee87ea0fef4db05f734eed40dc74be6d9a62eff0b"} Jan 30 07:00:41 crc kubenswrapper[4520]: I0130 07:00:41.538629 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=38.538608214 podStartE2EDuration="38.538608214s" podCreationTimestamp="2026-01-30 07:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:41.494285745 +0000 UTC m=+955.122637926" watchObservedRunningTime="2026-01-30 07:00:41.538608214 +0000 UTC m=+955.166960395" Jan 30 07:00:42 crc kubenswrapper[4520]: I0130 07:00:42.950670 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8smt" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.002295 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-combined-ca-bundle\") pod \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.003312 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-scripts\") pod \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.003367 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1fa358-6b62-4cf6-a32c-89e98f169b42-logs\") pod \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.003435 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-config-data\") pod \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.003580 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5wcz\" (UniqueName: \"kubernetes.io/projected/0b1fa358-6b62-4cf6-a32c-89e98f169b42-kube-api-access-l5wcz\") pod \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\" (UID: \"0b1fa358-6b62-4cf6-a32c-89e98f169b42\") " Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.003869 4520 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b1fa358-6b62-4cf6-a32c-89e98f169b42-logs" (OuterVolumeSpecName: "logs") pod "0b1fa358-6b62-4cf6-a32c-89e98f169b42" (UID: "0b1fa358-6b62-4cf6-a32c-89e98f169b42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.004307 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1fa358-6b62-4cf6-a32c-89e98f169b42-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.008581 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-scripts" (OuterVolumeSpecName: "scripts") pod "0b1fa358-6b62-4cf6-a32c-89e98f169b42" (UID: "0b1fa358-6b62-4cf6-a32c-89e98f169b42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.012678 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1fa358-6b62-4cf6-a32c-89e98f169b42-kube-api-access-l5wcz" (OuterVolumeSpecName: "kube-api-access-l5wcz") pod "0b1fa358-6b62-4cf6-a32c-89e98f169b42" (UID: "0b1fa358-6b62-4cf6-a32c-89e98f169b42"). InnerVolumeSpecName "kube-api-access-l5wcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.041565 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b1fa358-6b62-4cf6-a32c-89e98f169b42" (UID: "0b1fa358-6b62-4cf6-a32c-89e98f169b42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.054676 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-config-data" (OuterVolumeSpecName: "config-data") pod "0b1fa358-6b62-4cf6-a32c-89e98f169b42" (UID: "0b1fa358-6b62-4cf6-a32c-89e98f169b42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.106804 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.106836 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.106848 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5wcz\" (UniqueName: \"kubernetes.io/projected/0b1fa358-6b62-4cf6-a32c-89e98f169b42-kube-api-access-l5wcz\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.106859 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1fa358-6b62-4cf6-a32c-89e98f169b42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.136962 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:00:43 crc kubenswrapper[4520]: W0130 07:00:43.137237 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe9a112d_54bd_4ecd_bd57_5649fb5ae79f.slice/crio-de4c12bcddde17643951dca029a3f16812a28d56a8fd6bb5ba7452f9e6032fc9 WatchSource:0}: Error finding container de4c12bcddde17643951dca029a3f16812a28d56a8fd6bb5ba7452f9e6032fc9: Status 404 returned error can't find the container with id de4c12bcddde17643951dca029a3f16812a28d56a8fd6bb5ba7452f9e6032fc9 Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.498882 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"be9a112d-54bd-4ecd-bd57-5649fb5ae79f","Type":"ContainerStarted","Data":"de4c12bcddde17643951dca029a3f16812a28d56a8fd6bb5ba7452f9e6032fc9"} Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.502952 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerStarted","Data":"047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03"} Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.505800 4520 generic.go:334] "Generic (PLEG): container finished" podID="390f8ec2-a783-45b8-a1c8-984400c11237" containerID="5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020" exitCode=0 Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.506073 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdh5m" event={"ID":"390f8ec2-a783-45b8-a1c8-984400c11237","Type":"ContainerDied","Data":"5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020"} Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.509375 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8smt" event={"ID":"0b1fa358-6b62-4cf6-a32c-89e98f169b42","Type":"ContainerDied","Data":"a086183915d986c150d57ab719b79f1867b24718210424dc9ff5debc826d4844"} Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.509415 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a086183915d986c150d57ab719b79f1867b24718210424dc9ff5debc826d4844" Jan 30 
07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.509469 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8smt" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.632339 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bf4fcb464-scxkz"] Jan 30 07:00:43 crc kubenswrapper[4520]: E0130 07:00:43.632782 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1fa358-6b62-4cf6-a32c-89e98f169b42" containerName="placement-db-sync" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.632804 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1fa358-6b62-4cf6-a32c-89e98f169b42" containerName="placement-db-sync" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.632987 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1fa358-6b62-4cf6-a32c-89e98f169b42" containerName="placement-db-sync" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.644218 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.648418 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.649456 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.649707 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n8t4s" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.649841 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.649934 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.675820 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bf4fcb464-scxkz"] Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.728896 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-combined-ca-bundle\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.729076 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-logs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.729151 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-config-data\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.729195 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2hn\" (UniqueName: 
\"kubernetes.io/projected/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-kube-api-access-pn2hn\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.729402 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-internal-tls-certs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.729458 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-public-tls-certs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.729534 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-scripts\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.831910 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-combined-ca-bundle\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.831990 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-logs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.832024 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-config-data\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.832051 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn2hn\" (UniqueName: \"kubernetes.io/projected/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-kube-api-access-pn2hn\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.832141 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-internal-tls-certs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.832178 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-public-tls-certs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.832211 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-scripts\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.833079 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-logs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.836701 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-config-data\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.840304 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-internal-tls-certs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.843218 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-combined-ca-bundle\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.843778 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-scripts\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.854160 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-public-tls-certs\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.861904 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2hn\" (UniqueName: \"kubernetes.io/projected/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-kube-api-access-pn2hn\") pod \"placement-bf4fcb464-scxkz\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:43 crc kubenswrapper[4520]: I0130 07:00:43.998702 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.174201 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c459697cb-g922m" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.174263 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c459697cb-g922m" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.354627 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.354895 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.371801 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.372435 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.411727 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.439476 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.512634 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bf4fcb464-scxkz"] Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.523489 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"be9a112d-54bd-4ecd-bd57-5649fb5ae79f","Type":"ContainerStarted","Data":"37c52a65cacf4ff4c8e717a6432b07b0aa845022d36485b2f1128811dd9e3c3a"} Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.535565 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdh5m" event={"ID":"390f8ec2-a783-45b8-a1c8-984400c11237","Type":"ContainerStarted","Data":"2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d"} Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.535717 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.535938 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 07:00:44 crc kubenswrapper[4520]: I0130 07:00:44.629949 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rdh5m" podStartSLOduration=6.03100694 podStartE2EDuration="9.629926562s" podCreationTimestamp="2026-01-30 07:00:35 +0000 UTC" firstStartedPulling="2026-01-30 07:00:40.453813586 +0000 UTC m=+954.082165768" lastFinishedPulling="2026-01-30 07:00:44.052733209 +0000 UTC m=+957.681085390" observedRunningTime="2026-01-30 07:00:44.585807716 +0000 UTC m=+958.214159897" watchObservedRunningTime="2026-01-30 07:00:44.629926562 +0000 UTC m=+958.258278743" Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.562746 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bf4fcb464-scxkz" 
event={"ID":"5aad2c74-01f1-4dd2-95b4-5e4299adcb99","Type":"ContainerStarted","Data":"43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433"} Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.563405 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bf4fcb464-scxkz" event={"ID":"5aad2c74-01f1-4dd2-95b4-5e4299adcb99","Type":"ContainerStarted","Data":"5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1"} Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.563420 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bf4fcb464-scxkz" event={"ID":"5aad2c74-01f1-4dd2-95b4-5e4299adcb99","Type":"ContainerStarted","Data":"556e9625966c7cb4e3e7a23fa74c7655fdabb1a8eb235006f30bdcb0198383d3"} Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.564710 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.564789 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.572241 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"be9a112d-54bd-4ecd-bd57-5649fb5ae79f","Type":"ContainerStarted","Data":"031bb27045d77e81a0ede0f6f9ccfebeb7a22a66da950d24f197e63f2ec65d97"} Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.576091 4520 generic.go:334] "Generic (PLEG): container finished" podID="df706708-e03c-4d6e-ac65-229a419d653f" containerID="acf5d09c7e94ac9bf5c6318c5b1c6a00d87a60284ad2d32f701bf5a5c0ee6bee" exitCode=0 Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.577440 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zrdtq" event={"ID":"df706708-e03c-4d6e-ac65-229a419d653f","Type":"ContainerDied","Data":"acf5d09c7e94ac9bf5c6318c5b1c6a00d87a60284ad2d32f701bf5a5c0ee6bee"} Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.591343 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bf4fcb464-scxkz" podStartSLOduration=2.591334965 podStartE2EDuration="2.591334965s" podCreationTimestamp="2026-01-30 07:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:45.588672289 +0000 UTC m=+959.217024470" watchObservedRunningTime="2026-01-30 07:00:45.591334965 +0000 UTC m=+959.219687145" Jan 30 07:00:45 crc kubenswrapper[4520]: I0130 07:00:45.641642 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.641621527 podStartE2EDuration="5.641621527s" podCreationTimestamp="2026-01-30 07:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:45.638093223 +0000 UTC m=+959.266445405" watchObservedRunningTime="2026-01-30 07:00:45.641621527 +0000 UTC m=+959.269973709" Jan 30 07:00:46 crc kubenswrapper[4520]: I0130 07:00:46.079056 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:46 crc kubenswrapper[4520]: I0130 07:00:46.079368 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:46 crc 
kubenswrapper[4520]: I0130 07:00:46.591857 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 07:00:47 crc kubenswrapper[4520]: I0130 07:00:47.130962 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rdh5m" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="registry-server" probeResult="failure" output=< Jan 30 07:00:47 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:00:47 crc kubenswrapper[4520]: > Jan 30 07:00:47 crc kubenswrapper[4520]: I0130 07:00:47.653182 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:00:47 crc kubenswrapper[4520]: I0130 07:00:47.770561 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8c7b95dc-bv8zz"] Jan 30 07:00:47 crc kubenswrapper[4520]: I0130 07:00:47.770918 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" podUID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerName="dnsmasq-dns" containerID="cri-o://a90541b3e8d03f9618d7f923b5e099ccb396941d7cdd4949571451bdb9a20917" gracePeriod=10 Jan 30 07:00:48 crc kubenswrapper[4520]: I0130 07:00:48.565875 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 07:00:48 crc kubenswrapper[4520]: I0130 07:00:48.566186 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 07:00:48 crc kubenswrapper[4520]: I0130 07:00:48.579191 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 07:00:48 crc kubenswrapper[4520]: I0130 07:00:48.658229 4520 generic.go:334] "Generic (PLEG): container finished" podID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerID="a90541b3e8d03f9618d7f923b5e099ccb396941d7cdd4949571451bdb9a20917" exitCode=0 Jan 30 07:00:48 crc kubenswrapper[4520]: I0130 07:00:48.660124 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" event={"ID":"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726","Type":"ContainerDied","Data":"a90541b3e8d03f9618d7f923b5e099ccb396941d7cdd4949571451bdb9a20917"} Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.275022 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.405335 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-fernet-keys\") pod \"df706708-e03c-4d6e-ac65-229a419d653f\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.405538 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-credential-keys\") pod \"df706708-e03c-4d6e-ac65-229a419d653f\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.405608 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-config-data\") pod \"df706708-e03c-4d6e-ac65-229a419d653f\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.405700 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-combined-ca-bundle\") pod \"df706708-e03c-4d6e-ac65-229a419d653f\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.405749 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dvcd\" (UniqueName: \"kubernetes.io/projected/df706708-e03c-4d6e-ac65-229a419d653f-kube-api-access-4dvcd\") pod \"df706708-e03c-4d6e-ac65-229a419d653f\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.405857 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-scripts\") pod \"df706708-e03c-4d6e-ac65-229a419d653f\" (UID: \"df706708-e03c-4d6e-ac65-229a419d653f\") " Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.416454 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "df706708-e03c-4d6e-ac65-229a419d653f" (UID: "df706708-e03c-4d6e-ac65-229a419d653f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.416984 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "df706708-e03c-4d6e-ac65-229a419d653f" (UID: "df706708-e03c-4d6e-ac65-229a419d653f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.420632 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-scripts" (OuterVolumeSpecName: "scripts") pod "df706708-e03c-4d6e-ac65-229a419d653f" (UID: "df706708-e03c-4d6e-ac65-229a419d653f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.422439 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df706708-e03c-4d6e-ac65-229a419d653f-kube-api-access-4dvcd" (OuterVolumeSpecName: "kube-api-access-4dvcd") pod "df706708-e03c-4d6e-ac65-229a419d653f" (UID: "df706708-e03c-4d6e-ac65-229a419d653f"). InnerVolumeSpecName "kube-api-access-4dvcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.487603 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-config-data" (OuterVolumeSpecName: "config-data") pod "df706708-e03c-4d6e-ac65-229a419d653f" (UID: "df706708-e03c-4d6e-ac65-229a419d653f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.488411 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df706708-e03c-4d6e-ac65-229a419d653f" (UID: "df706708-e03c-4d6e-ac65-229a419d653f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.508123 4520 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.508154 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.508165 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.508176 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dvcd\" (UniqueName: \"kubernetes.io/projected/df706708-e03c-4d6e-ac65-229a419d653f-kube-api-access-4dvcd\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.508186 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.508195 4520 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df706708-e03c-4d6e-ac65-229a419d653f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.676245 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zrdtq" event={"ID":"df706708-e03c-4d6e-ac65-229a419d653f","Type":"ContainerDied","Data":"b8a2bcf19a893bed1ad6f2f72c7e82effd9aeaf462c80449133d328596ed3142"} Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 07:00:49.676292 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a2bcf19a893bed1ad6f2f72c7e82effd9aeaf462c80449133d328596ed3142" Jan 30 07:00:49 crc kubenswrapper[4520]: I0130 
07:00:49.676304 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zrdtq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.447831 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" podUID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.510576 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-59d84c9dc8-9scqq"] Jan 30 07:00:50 crc kubenswrapper[4520]: E0130 07:00:50.511090 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df706708-e03c-4d6e-ac65-229a419d653f" containerName="keystone-bootstrap" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.511111 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="df706708-e03c-4d6e-ac65-229a419d653f" containerName="keystone-bootstrap" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.511325 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="df706708-e03c-4d6e-ac65-229a419d653f" containerName="keystone-bootstrap" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.512148 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.518415 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.518592 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.518730 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.518897 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.519005 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jddpd" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.519120 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.529863 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59d84c9dc8-9scqq"] Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.632554 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-scripts\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.632602 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-credential-keys\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.632656 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-fernet-keys\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.632689 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchm5\" (UniqueName: \"kubernetes.io/projected/6363204f-a019-4617-ae64-9825f87969fa-kube-api-access-nchm5\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.632746 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-internal-tls-certs\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.632812 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-config-data\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.632836 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-public-tls-certs\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.632887 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-combined-ca-bundle\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.739300 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-combined-ca-bundle\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.739366 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-scripts\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.739395 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-credential-keys\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.739441 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-fernet-keys\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.739488 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchm5\" (UniqueName: \"kubernetes.io/projected/6363204f-a019-4617-ae64-9825f87969fa-kube-api-access-nchm5\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.739548 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-internal-tls-certs\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.739657 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-config-data\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.739691 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-public-tls-certs\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.754127 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-internal-tls-certs\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.754366 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-public-tls-certs\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.759840 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-combined-ca-bundle\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.772567 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchm5\" (UniqueName: \"kubernetes.io/projected/6363204f-a019-4617-ae64-9825f87969fa-kube-api-access-nchm5\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.779422 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-scripts\") pod \"keystone-59d84c9dc8-9scqq\" (UID: 
\"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.780075 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-fernet-keys\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.781202 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cc56s"] Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.783082 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.791858 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-config-data\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.805830 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6363204f-a019-4617-ae64-9825f87969fa-credential-keys\") pod \"keystone-59d84c9dc8-9scqq\" (UID: \"6363204f-a019-4617-ae64-9825f87969fa\") " pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.808453 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cc56s"] Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.835962 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.835990 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.847208 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.885139 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.891314 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.943101 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjk65\" (UniqueName: \"kubernetes.io/projected/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-kube-api-access-wjk65\") pod \"redhat-marketplace-cc56s\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.943152 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-catalog-content\") pod \"redhat-marketplace-cc56s\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:50 crc kubenswrapper[4520]: I0130 07:00:50.943226 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-utilities\") pod \"redhat-marketplace-cc56s\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.045992 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-catalog-content\") pod \"redhat-marketplace-cc56s\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.046103 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-utilities\") pod \"redhat-marketplace-cc56s\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.046235 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjk65\" (UniqueName: \"kubernetes.io/projected/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-kube-api-access-wjk65\") pod \"redhat-marketplace-cc56s\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.047093 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-catalog-content\") pod \"redhat-marketplace-cc56s\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.047438 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-utilities\") pod \"redhat-marketplace-cc56s\" (UID: 
\"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.078947 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjk65\" (UniqueName: \"kubernetes.io/projected/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-kube-api-access-wjk65\") pod \"redhat-marketplace-cc56s\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.195970 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.696220 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:51 crc kubenswrapper[4520]: I0130 07:00:51.696452 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.184101 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.314287 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-swift-storage-0\") pod \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.317773 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7rs4\" (UniqueName: \"kubernetes.io/projected/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-kube-api-access-z7rs4\") pod \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.317804 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-config\") pod \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.318186 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-svc\") pod \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.318226 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-sb\") pod \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.318377 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-nb\") pod \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\" (UID: \"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726\") " Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.352924 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-kube-api-access-z7rs4" (OuterVolumeSpecName: "kube-api-access-z7rs4") pod "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" (UID: "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726"). InnerVolumeSpecName "kube-api-access-z7rs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.420143 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" (UID: "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.426564 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7rs4\" (UniqueName: \"kubernetes.io/projected/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-kube-api-access-z7rs4\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.426599 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.429706 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59d84c9dc8-9scqq"] Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.430237 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" (UID: "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.452157 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" (UID: "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.465983 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" (UID: "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.505105 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-config" (OuterVolumeSpecName: "config") pod "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" (UID: "e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.527966 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.527992 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.528002 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.528011 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.548361 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cc56s"] Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.717833 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" event={"ID":"e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726","Type":"ContainerDied","Data":"219da46f145d4ea7a0003fdc397076632f5054269f9051726645d4eb58941fda"} Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.718102 4520 scope.go:117] "RemoveContainer" containerID="a90541b3e8d03f9618d7f923b5e099ccb396941d7cdd4949571451bdb9a20917" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.718247 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8c7b95dc-bv8zz" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.737740 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerStarted","Data":"b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8"} Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.739804 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ld8j2" event={"ID":"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4","Type":"ContainerStarted","Data":"fc36abad5343603ac251f9b179313f05a7b50a287be29781953fa4cbec0660f8"} Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.751350 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qndsg" event={"ID":"1771d5c5-4904-435a-81ac-80eaaf23bc68","Type":"ContainerStarted","Data":"8c2a47b934cd7fcb72c7ebaab7afee2f34e2c9ffce80e8f1cb99669cbb0bb412"} Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.755858 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8c7b95dc-bv8zz"] Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.763633 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c8c7b95dc-bv8zz"] Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.764002 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ld8j2" podStartSLOduration=2.621046183 podStartE2EDuration="58.763986082s" podCreationTimestamp="2026-01-30 06:59:55 +0000 UTC" firstStartedPulling="2026-01-30 06:59:56.89798531 +0000 UTC m=+910.526337481" lastFinishedPulling="2026-01-30 07:00:53.040925199 +0000 UTC m=+966.669277380" observedRunningTime="2026-01-30 07:00:53.763291907 +0000 UTC m=+967.391644079" watchObservedRunningTime="2026-01-30 07:00:53.763986082 +0000 UTC m=+967.392338263" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.764563 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59d84c9dc8-9scqq" event={"ID":"6363204f-a019-4617-ae64-9825f87969fa","Type":"ContainerStarted","Data":"a214a79e4fbdf6aa72fb6bf8e49d6bd7f9feb038d99a466f7b8f7dc927a2da64"} Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.765170 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.770791 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cc56s" event={"ID":"be78f3d6-9a68-4858-8d5b-a2fe0ea03050","Type":"ContainerStarted","Data":"33fd165f00d9aa97214867245e229de662e05219347dd1233da9579df9e8c08b"} Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.779546 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-qndsg" podStartSLOduration=3.197764225 podStartE2EDuration="59.779533143s" podCreationTimestamp="2026-01-30 06:59:54 +0000 UTC" firstStartedPulling="2026-01-30 06:59:56.466361848 +0000 UTC m=+910.094714019" lastFinishedPulling="2026-01-30 07:00:53.048130756 +0000 UTC m=+966.676482937" observedRunningTime="2026-01-30 07:00:53.779050084 +0000 UTC m=+967.407402265" watchObservedRunningTime="2026-01-30 07:00:53.779533143 +0000 UTC m=+967.407885324" Jan 30 07:00:53 crc kubenswrapper[4520]: I0130 07:00:53.799253 4520 scope.go:117] "RemoveContainer" containerID="664fed1e55e0737a08606e2132270298741a8c14731b75db8e2505debbb55860" Jan 30 07:00:53 crc 
kubenswrapper[4520]: I0130 07:00:53.821761 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-59d84c9dc8-9scqq" podStartSLOduration=3.821730717 podStartE2EDuration="3.821730717s" podCreationTimestamp="2026-01-30 07:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:00:53.80811026 +0000 UTC m=+967.436462440" watchObservedRunningTime="2026-01-30 07:00:53.821730717 +0000 UTC m=+967.450082898" Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.171368 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.372971 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d9dd85bbd-2g75n" podUID="bcc0bac1-6294-432a-8703-fbef10b2a44f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.417282 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.417388 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.422969 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.698547 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" path="/var/lib/kubelet/pods/e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726/volumes" Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.778576 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xgsxk" event={"ID":"fc2063bc-3a1e-4e9f-badc-299e256a2f3c","Type":"ContainerStarted","Data":"3060b06c3e9bfb4124e00440650a276b127207c3ebb47c4e79baea7996cee5a0"} Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.781224 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59d84c9dc8-9scqq" event={"ID":"6363204f-a019-4617-ae64-9825f87969fa","Type":"ContainerStarted","Data":"f515c594210d2641ceb3fba843a07ba4a4a62ccb51579e84e624f5d2d342763c"} Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.785057 4520 generic.go:334] "Generic (PLEG): container finished" podID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerID="991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c" exitCode=0 Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.785134 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cc56s" event={"ID":"be78f3d6-9a68-4858-8d5b-a2fe0ea03050","Type":"ContainerDied","Data":"991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c"} Jan 30 07:00:54 crc kubenswrapper[4520]: I0130 07:00:54.811755 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xgsxk" podStartSLOduration=4.72037707 podStartE2EDuration="1m0.811742275s" podCreationTimestamp="2026-01-30 
06:59:54 +0000 UTC" firstStartedPulling="2026-01-30 06:59:56.957857083 +0000 UTC m=+910.586209264" lastFinishedPulling="2026-01-30 07:00:53.049222288 +0000 UTC m=+966.677574469" observedRunningTime="2026-01-30 07:00:54.808725343 +0000 UTC m=+968.437077524" watchObservedRunningTime="2026-01-30 07:00:54.811742275 +0000 UTC m=+968.440094456" Jan 30 07:00:55 crc kubenswrapper[4520]: I0130 07:00:55.808657 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cc56s" event={"ID":"be78f3d6-9a68-4858-8d5b-a2fe0ea03050","Type":"ContainerStarted","Data":"a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b"} Jan 30 07:00:56 crc kubenswrapper[4520]: I0130 07:00:56.135000 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:56 crc kubenswrapper[4520]: I0130 07:00:56.180840 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:56 crc kubenswrapper[4520]: I0130 07:00:56.823868 4520 generic.go:334] "Generic (PLEG): container finished" podID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerID="a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b" exitCode=0 Jan 30 07:00:56 crc kubenswrapper[4520]: I0130 07:00:56.823954 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cc56s" event={"ID":"be78f3d6-9a68-4858-8d5b-a2fe0ea03050","Type":"ContainerDied","Data":"a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b"} Jan 30 07:00:56 crc kubenswrapper[4520]: I0130 07:00:56.827748 4520 generic.go:334] "Generic (PLEG): container finished" podID="77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" containerID="fc36abad5343603ac251f9b179313f05a7b50a287be29781953fa4cbec0660f8" exitCode=0 Jan 30 07:00:56 crc kubenswrapper[4520]: I0130 07:00:56.828318 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ld8j2" event={"ID":"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4","Type":"ContainerDied","Data":"fc36abad5343603ac251f9b179313f05a7b50a287be29781953fa4cbec0660f8"} Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.328732 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rdh5m"] Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.701090 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.844681 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cc56s" event={"ID":"be78f3d6-9a68-4858-8d5b-a2fe0ea03050","Type":"ContainerStarted","Data":"1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd"} Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.854729 4520 generic.go:334] "Generic (PLEG): container finished" podID="1771d5c5-4904-435a-81ac-80eaaf23bc68" containerID="8c2a47b934cd7fcb72c7ebaab7afee2f34e2c9ffce80e8f1cb99669cbb0bb412" exitCode=0 Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.854875 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qndsg" event={"ID":"1771d5c5-4904-435a-81ac-80eaaf23bc68","Type":"ContainerDied","Data":"8c2a47b934cd7fcb72c7ebaab7afee2f34e2c9ffce80e8f1cb99669cbb0bb412"} Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.855040 4520 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-rdh5m" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="registry-server" containerID="cri-o://2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d" gracePeriod=2 Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.876256 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cc56s" podStartSLOduration=5.232354044 podStartE2EDuration="7.876242253s" podCreationTimestamp="2026-01-30 07:00:50 +0000 UTC" firstStartedPulling="2026-01-30 07:00:54.786837052 +0000 UTC m=+968.415189232" lastFinishedPulling="2026-01-30 07:00:57.430725261 +0000 UTC m=+971.059077441" observedRunningTime="2026-01-30 07:00:57.864030184 +0000 UTC m=+971.492382364" watchObservedRunningTime="2026-01-30 07:00:57.876242253 +0000 UTC m=+971.504594434" Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.972349 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7445dc46fc-s424z"] Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.977498 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7445dc46fc-s424z" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-api" containerID="cri-o://c300fc62e1373c388229a82c0d2f920a528002128dc058a25a8b291ab97f13c0" gracePeriod=30 Jan 30 07:00:57 crc kubenswrapper[4520]: I0130 07:00:57.979825 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7445dc46fc-s424z" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-httpd" containerID="cri-o://be31646e606daa8921125c772c609b179e4fdced55dbbd3d1d7da3abaff7801a" gracePeriod=30 Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.011758 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c56fc575-hzw9q"] Jan 30 07:00:58 crc kubenswrapper[4520]: E0130 07:00:58.012173 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerName="dnsmasq-dns" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.012192 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerName="dnsmasq-dns" Jan 30 07:00:58 crc kubenswrapper[4520]: E0130 07:00:58.012214 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerName="init" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.012220 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerName="init" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.012366 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e83f5d4e-e09c-49c7-b5a1-e7ec5b0da726" containerName="dnsmasq-dns" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.014159 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.016637 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7445dc46fc-s424z" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": EOF" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.036260 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c56fc575-hzw9q"] Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.146284 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-combined-ca-bundle\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.146624 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-config\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.146765 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-ovndb-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.146831 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrx6j\" (UniqueName: \"kubernetes.io/projected/db846546-7955-4c19-87aa-188602e349e8-kube-api-access-wrx6j\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.146936 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-internal-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.147003 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-httpd-config\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.147111 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-public-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.252441 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-ovndb-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.252769 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrx6j\" (UniqueName: \"kubernetes.io/projected/db846546-7955-4c19-87aa-188602e349e8-kube-api-access-wrx6j\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.255730 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-internal-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.255817 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-httpd-config\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.257554 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-public-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.258086 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-combined-ca-bundle\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.258226 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-config\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.262958 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-combined-ca-bundle\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.268793 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-config\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.275736 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-internal-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " 
pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.306357 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-httpd-config\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.306683 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-public-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.308823 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrx6j\" (UniqueName: \"kubernetes.io/projected/db846546-7955-4c19-87aa-188602e349e8-kube-api-access-wrx6j\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.311125 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-ovndb-tls-certs\") pod \"neutron-7c56fc575-hzw9q\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.369367 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.543190 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ld8j2" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.634393 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-db-sync-config-data\") pod \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.634685 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z8r7\" (UniqueName: \"kubernetes.io/projected/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-kube-api-access-4z8r7\") pod \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.634791 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-combined-ca-bundle\") pod \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.657712 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-kube-api-access-4z8r7" (OuterVolumeSpecName: "kube-api-access-4z8r7") pod "77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" (UID: "77b507ad-cda3-49b8-9a29-4c10ce6c1ac4"). InnerVolumeSpecName "kube-api-access-4z8r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.665739 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.696043 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" (UID: "77b507ad-cda3-49b8-9a29-4c10ce6c1ac4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.746204 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" (UID: "77b507ad-cda3-49b8-9a29-4c10ce6c1ac4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.762173 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4n86\" (UniqueName: \"kubernetes.io/projected/390f8ec2-a783-45b8-a1c8-984400c11237-kube-api-access-g4n86\") pod \"390f8ec2-a783-45b8-a1c8-984400c11237\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.765224 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-catalog-content\") pod \"390f8ec2-a783-45b8-a1c8-984400c11237\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.766903 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-combined-ca-bundle\") pod \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\" (UID: \"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4\") " Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.767095 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-utilities\") pod \"390f8ec2-a783-45b8-a1c8-984400c11237\" (UID: \"390f8ec2-a783-45b8-a1c8-984400c11237\") " Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.767827 4520 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.767847 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z8r7\" (UniqueName: \"kubernetes.io/projected/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-kube-api-access-4z8r7\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:58 crc kubenswrapper[4520]: W0130 07:00:58.768414 4520 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4/volumes/kubernetes.io~secret/combined-ca-bundle Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.768435 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" (UID: "77b507ad-cda3-49b8-9a29-4c10ce6c1ac4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.776117 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-utilities" (OuterVolumeSpecName: "utilities") pod "390f8ec2-a783-45b8-a1c8-984400c11237" (UID: "390f8ec2-a783-45b8-a1c8-984400c11237"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.783741 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/390f8ec2-a783-45b8-a1c8-984400c11237-kube-api-access-g4n86" (OuterVolumeSpecName: "kube-api-access-g4n86") pod "390f8ec2-a783-45b8-a1c8-984400c11237" (UID: "390f8ec2-a783-45b8-a1c8-984400c11237"). InnerVolumeSpecName "kube-api-access-g4n86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.834459 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "390f8ec2-a783-45b8-a1c8-984400c11237" (UID: "390f8ec2-a783-45b8-a1c8-984400c11237"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.871996 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.872035 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4n86\" (UniqueName: \"kubernetes.io/projected/390f8ec2-a783-45b8-a1c8-984400c11237-kube-api-access-g4n86\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.872049 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390f8ec2-a783-45b8-a1c8-984400c11237-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.872074 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.892863 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ld8j2" event={"ID":"77b507ad-cda3-49b8-9a29-4c10ce6c1ac4","Type":"ContainerDied","Data":"fc68f1711e690b7b0fb339f9e6aec250a87486d49ac6a686216bb5752eac0d5e"} Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.892919 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc68f1711e690b7b0fb339f9e6aec250a87486d49ac6a686216bb5752eac0d5e" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.893062 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ld8j2" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.920383 4520 generic.go:334] "Generic (PLEG): container finished" podID="390f8ec2-a783-45b8-a1c8-984400c11237" containerID="2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d" exitCode=0 Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.920456 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdh5m" event={"ID":"390f8ec2-a783-45b8-a1c8-984400c11237","Type":"ContainerDied","Data":"2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d"} Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.920485 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rdh5m" event={"ID":"390f8ec2-a783-45b8-a1c8-984400c11237","Type":"ContainerDied","Data":"661c16f6d42a8ddbaba506d28036a5de14eaa583d1bfe9e0d8fddfba8343be26"} Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.920504 4520 scope.go:117] "RemoveContainer" containerID="2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.920659 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rdh5m" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.936397 4520 generic.go:334] "Generic (PLEG): container finished" podID="fc2063bc-3a1e-4e9f-badc-299e256a2f3c" containerID="3060b06c3e9bfb4124e00440650a276b127207c3ebb47c4e79baea7996cee5a0" exitCode=0 Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.936530 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xgsxk" event={"ID":"fc2063bc-3a1e-4e9f-badc-299e256a2f3c","Type":"ContainerDied","Data":"3060b06c3e9bfb4124e00440650a276b127207c3ebb47c4e79baea7996cee5a0"} Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.950238 4520 generic.go:334] "Generic (PLEG): container finished" podID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerID="be31646e606daa8921125c772c609b179e4fdced55dbbd3d1d7da3abaff7801a" exitCode=0 Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.950563 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7445dc46fc-s424z" event={"ID":"07aa3f61-cfcb-4aa2-8430-e4f800dbf572","Type":"ContainerDied","Data":"be31646e606daa8921125c772c609b179e4fdced55dbbd3d1d7da3abaff7801a"} Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.981837 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rdh5m"] Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.982178 4520 scope.go:117] "RemoveContainer" containerID="5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020" Jan 30 07:00:58 crc kubenswrapper[4520]: I0130 07:00:58.989490 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rdh5m"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.021257 4520 scope.go:117] "RemoveContainer" containerID="dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.094985 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68cd6684c9-j8kr8"] Jan 30 07:00:59 crc kubenswrapper[4520]: E0130 07:00:59.097013 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="extract-utilities" Jan 30 
07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.097035 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="extract-utilities" Jan 30 07:00:59 crc kubenswrapper[4520]: E0130 07:00:59.097050 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" containerName="barbican-db-sync" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.097057 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" containerName="barbican-db-sync" Jan 30 07:00:59 crc kubenswrapper[4520]: E0130 07:00:59.097078 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="extract-content" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.097085 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="extract-content" Jan 30 07:00:59 crc kubenswrapper[4520]: E0130 07:00:59.097101 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="registry-server" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.097107 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="registry-server" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.097277 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" containerName="registry-server" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.097294 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" containerName="barbican-db-sync" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.098821 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.109014 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.109208 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kt86f" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.109789 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.139850 4520 scope.go:117] "RemoveContainer" containerID="2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d" Jan 30 07:00:59 crc kubenswrapper[4520]: E0130 07:00:59.142594 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d\": container with ID starting with 2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d not found: ID does not exist" containerID="2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.142638 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d"} err="failed to get container status \"2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d\": rpc error: code = NotFound desc = could not find container \"2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d\": container with ID starting with 2f5945ee55f097cb3b9297af653fef38d9683cfaa3cac76f84aee0e877d8b98d not found: ID does not exist" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.142675 4520 scope.go:117] "RemoveContainer" containerID="5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020" Jan 30 07:00:59 crc kubenswrapper[4520]: E0130 07:00:59.145142 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020\": container with ID starting with 5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020 not found: ID does not exist" containerID="5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.145184 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020"} err="failed to get container status \"5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020\": rpc error: code = NotFound desc = could not find container \"5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020\": container with ID starting with 5a96c4375a563abd82dddeb2825a0ff6b64801afb06bbf05306d40bdf9ec7020 not found: ID does not exist" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.145213 4520 scope.go:117] "RemoveContainer" containerID="dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be" Jan 30 07:00:59 crc kubenswrapper[4520]: E0130 07:00:59.147706 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be\": container with ID starting with 
dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be not found: ID does not exist" containerID="dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.147741 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be"} err="failed to get container status \"dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be\": rpc error: code = NotFound desc = could not find container \"dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be\": container with ID starting with dfb14464303b8474f7c69d5100feb7870f7a8788c3a286c193bafd881288c6be not found: ID does not exist" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.154926 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68cd6684c9-j8kr8"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.187058 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9415b7-ddcf-40e8-b404-51911e38b5c7-logs\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.187110 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqgvt\" (UniqueName: \"kubernetes.io/projected/7b9415b7-ddcf-40e8-b404-51911e38b5c7-kube-api-access-jqgvt\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.187215 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-config-data\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.187236 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-combined-ca-bundle\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.187282 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-config-data-custom\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.226962 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c5d4f857d-ww6k4"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.228737 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.243920 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.247350 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c5d4f857d-ww6k4"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.271565 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bfcf6757f-bv4bw"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.273211 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.278912 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bfcf6757f-bv4bw"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.289805 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-config-data\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.289842 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed2b222-c964-4e11-914d-e3f45b8b4b02-logs\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.289991 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9415b7-ddcf-40e8-b404-51911e38b5c7-logs\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.290019 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqgvt\" (UniqueName: \"kubernetes.io/projected/7b9415b7-ddcf-40e8-b404-51911e38b5c7-kube-api-access-jqgvt\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.290091 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.290136 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-config-data\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.290154 4520 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-combined-ca-bundle\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.290195 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-config-data-custom\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.290216 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs97g\" (UniqueName: \"kubernetes.io/projected/eed2b222-c964-4e11-914d-e3f45b8b4b02-kube-api-access-vs97g\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.290249 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-config-data-custom\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.292035 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9415b7-ddcf-40e8-b404-51911e38b5c7-logs\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.306967 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-combined-ca-bundle\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.313089 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-config-data\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.313498 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b9415b7-ddcf-40e8-b404-51911e38b5c7-config-data-custom\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.332286 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqgvt\" (UniqueName: \"kubernetes.io/projected/7b9415b7-ddcf-40e8-b404-51911e38b5c7-kube-api-access-jqgvt\") pod \"barbican-worker-68cd6684c9-j8kr8\" (UID: \"7b9415b7-ddcf-40e8-b404-51911e38b5c7\") " pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 
07:00:59.335362 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68cd6684c9-j8kr8" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400368 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-config-data\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400415 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed2b222-c964-4e11-914d-e3f45b8b4b02-logs\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400481 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-swift-storage-0\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400524 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400641 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-config\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400698 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400720 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-svc\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400749 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-nb\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400811 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-config-data-custom\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400835 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs97g\" (UniqueName: \"kubernetes.io/projected/eed2b222-c964-4e11-914d-e3f45b8b4b02-kube-api-access-vs97g\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.400879 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7nd\" (UniqueName: \"kubernetes.io/projected/da8fcfab-9e74-40b5-87a2-a771a93c64e3-kube-api-access-hh7nd\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.401207 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed2b222-c964-4e11-914d-e3f45b8b4b02-logs\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.414681 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-config-data-custom\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.421441 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-config-data\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.423939 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed2b222-c964-4e11-914d-e3f45b8b4b02-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.432591 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs97g\" (UniqueName: \"kubernetes.io/projected/eed2b222-c964-4e11-914d-e3f45b8b4b02-kube-api-access-vs97g\") pod \"barbican-keystone-listener-5c5d4f857d-ww6k4\" (UID: \"eed2b222-c964-4e11-914d-e3f45b8b4b02\") " pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.463947 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c56fc575-hzw9q"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.479614 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-547b6f779b-dz8tp"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 
07:00:59.481001 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.482908 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-547b6f779b-dz8tp"] Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.498180 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.503211 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-swift-storage-0\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.503259 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.503383 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-config\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.503462 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-svc\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.503502 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-nb\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.503617 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh7nd\" (UniqueName: \"kubernetes.io/projected/da8fcfab-9e74-40b5-87a2-a771a93c64e3-kube-api-access-hh7nd\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.505307 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-swift-storage-0\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.506290 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " 
pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.506585 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-svc\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.507133 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-config\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.524007 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-nb\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.565465 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh7nd\" (UniqueName: \"kubernetes.io/projected/da8fcfab-9e74-40b5-87a2-a771a93c64e3-kube-api-access-hh7nd\") pod \"dnsmasq-dns-6bfcf6757f-bv4bw\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.607240 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.607568 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data-custom\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.607712 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvskx\" (UniqueName: \"kubernetes.io/projected/8c478fd5-c0c9-4959-8b1f-69b89aa24932-kube-api-access-hvskx\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.607836 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c478fd5-c0c9-4959-8b1f-69b89aa24932-logs\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.607921 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-combined-ca-bundle\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " 
pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.686932 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.697858 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.708066 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qndsg" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.710153 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c478fd5-c0c9-4959-8b1f-69b89aa24932-logs\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.710260 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-combined-ca-bundle\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.710409 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.710615 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data-custom\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.710731 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvskx\" (UniqueName: \"kubernetes.io/projected/8c478fd5-c0c9-4959-8b1f-69b89aa24932-kube-api-access-hvskx\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.711289 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c478fd5-c0c9-4959-8b1f-69b89aa24932-logs\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.730122 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.734989 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvskx\" (UniqueName: 
\"kubernetes.io/projected/8c478fd5-c0c9-4959-8b1f-69b89aa24932-kube-api-access-hvskx\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.740994 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-combined-ca-bundle\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.791035 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7445dc46fc-s424z" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": dial tcp 10.217.0.156:9696: connect: connection refused" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.791232 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data-custom\") pod \"barbican-api-547b6f779b-dz8tp\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.817593 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7zpp\" (UniqueName: \"kubernetes.io/projected/1771d5c5-4904-435a-81ac-80eaaf23bc68-kube-api-access-c7zpp\") pod \"1771d5c5-4904-435a-81ac-80eaaf23bc68\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.817728 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-combined-ca-bundle\") pod \"1771d5c5-4904-435a-81ac-80eaaf23bc68\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.817836 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-config-data\") pod \"1771d5c5-4904-435a-81ac-80eaaf23bc68\" (UID: \"1771d5c5-4904-435a-81ac-80eaaf23bc68\") " Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.836926 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1771d5c5-4904-435a-81ac-80eaaf23bc68-kube-api-access-c7zpp" (OuterVolumeSpecName: "kube-api-access-c7zpp") pod "1771d5c5-4904-435a-81ac-80eaaf23bc68" (UID: "1771d5c5-4904-435a-81ac-80eaaf23bc68"). InnerVolumeSpecName "kube-api-access-c7zpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.894617 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1771d5c5-4904-435a-81ac-80eaaf23bc68" (UID: "1771d5c5-4904-435a-81ac-80eaaf23bc68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.927032 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7zpp\" (UniqueName: \"kubernetes.io/projected/1771d5c5-4904-435a-81ac-80eaaf23bc68-kube-api-access-c7zpp\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:59 crc kubenswrapper[4520]: I0130 07:00:59.927056 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.013354 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.035557 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-config-data" (OuterVolumeSpecName: "config-data") pod "1771d5c5-4904-435a-81ac-80eaaf23bc68" (UID: "1771d5c5-4904-435a-81ac-80eaaf23bc68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.041016 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771d5c5-4904-435a-81ac-80eaaf23bc68-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.046298 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c56fc575-hzw9q" event={"ID":"db846546-7955-4c19-87aa-188602e349e8","Type":"ContainerStarted","Data":"8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19"} Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.046338 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c56fc575-hzw9q" event={"ID":"db846546-7955-4c19-87aa-188602e349e8","Type":"ContainerStarted","Data":"bb9f8bb9d8fac7f38faeec805ff3f4222289b54998d38976784db23ab326fb2f"} Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.076452 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qndsg" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.077719 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qndsg" event={"ID":"1771d5c5-4904-435a-81ac-80eaaf23bc68","Type":"ContainerDied","Data":"50b703e09a192b0738dc936337e63cc6423f80880832bc5fa0432c923ace9add"} Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.077768 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b703e09a192b0738dc936337e63cc6423f80880832bc5fa0432c923ace9add" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.080538 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68cd6684c9-j8kr8"] Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.191261 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29495941-kdn5b"] Jan 30 07:01:00 crc kubenswrapper[4520]: E0130 07:01:00.192041 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1771d5c5-4904-435a-81ac-80eaaf23bc68" containerName="heat-db-sync" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.192062 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1771d5c5-4904-435a-81ac-80eaaf23bc68" containerName="heat-db-sync" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.192302 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1771d5c5-4904-435a-81ac-80eaaf23bc68" containerName="heat-db-sync" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.193063 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.241853 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495941-kdn5b"] Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.250565 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-combined-ca-bundle\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.250615 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54zb\" (UniqueName: \"kubernetes.io/projected/5aa70617-4bf1-4555-886c-988e24cd5198-kube-api-access-n54zb\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.250695 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-fernet-keys\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.250738 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-config-data\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.352055 4520 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-config-data\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.352172 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-combined-ca-bundle\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.352206 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n54zb\" (UniqueName: \"kubernetes.io/projected/5aa70617-4bf1-4555-886c-988e24cd5198-kube-api-access-n54zb\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.352277 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-fernet-keys\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.362035 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-combined-ca-bundle\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.364157 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-fernet-keys\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.365344 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-config-data\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.368458 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bfcf6757f-bv4bw"] Jan 30 07:01:00 crc kubenswrapper[4520]: W0130 07:01:00.382774 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda8fcfab_9e74_40b5_87a2_a771a93c64e3.slice/crio-cdf36bce3e9b807b6c60694765274c6958cc0425fbac225031566cdd909c5924 WatchSource:0}: Error finding container cdf36bce3e9b807b6c60694765274c6958cc0425fbac225031566cdd909c5924: Status 404 returned error can't find the container with id cdf36bce3e9b807b6c60694765274c6958cc0425fbac225031566cdd909c5924 Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.396311 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n54zb\" (UniqueName: 
\"kubernetes.io/projected/5aa70617-4bf1-4555-886c-988e24cd5198-kube-api-access-n54zb\") pod \"keystone-cron-29495941-kdn5b\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.519154 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c5d4f857d-ww6k4"] Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.549618 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:00 crc kubenswrapper[4520]: W0130 07:01:00.549998 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeed2b222_c964_4e11_914d_e3f45b8b4b02.slice/crio-7dc6ca7c7261a02fff4736dcf4ea5b5b87e5684922d6fb81e9bbaccbdcc21749 WatchSource:0}: Error finding container 7dc6ca7c7261a02fff4736dcf4ea5b5b87e5684922d6fb81e9bbaccbdcc21749: Status 404 returned error can't find the container with id 7dc6ca7c7261a02fff4736dcf4ea5b5b87e5684922d6fb81e9bbaccbdcc21749 Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.705556 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="390f8ec2-a783-45b8-a1c8-984400c11237" path="/var/lib/kubelet/pods/390f8ec2-a783-45b8-a1c8-984400c11237/volumes" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.812861 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-547b6f779b-dz8tp"] Jan 30 07:01:00 crc kubenswrapper[4520]: W0130 07:01:00.823768 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c478fd5_c0c9_4959_8b1f_69b89aa24932.slice/crio-e326497d93780513dd2d1f5150ade3b1429c7489ef31b50c0a4003263b113f0f WatchSource:0}: Error finding container e326497d93780513dd2d1f5150ade3b1429c7489ef31b50c0a4003263b113f0f: Status 404 returned error can't find the container with id e326497d93780513dd2d1f5150ade3b1429c7489ef31b50c0a4003263b113f0f Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.914715 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xgsxk" Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.979473 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-combined-ca-bundle\") pod \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.979806 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-db-sync-config-data\") pod \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.979869 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfrzl\" (UniqueName: \"kubernetes.io/projected/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-kube-api-access-mfrzl\") pod \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.979964 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-config-data\") pod \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.980109 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-etc-machine-id\") pod \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.980129 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-scripts\") pod \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\" (UID: \"fc2063bc-3a1e-4e9f-badc-299e256a2f3c\") " Jan 30 07:01:00 crc kubenswrapper[4520]: I0130 07:01:00.987479 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fc2063bc-3a1e-4e9f-badc-299e256a2f3c" (UID: "fc2063bc-3a1e-4e9f-badc-299e256a2f3c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.003640 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fc2063bc-3a1e-4e9f-badc-299e256a2f3c" (UID: "fc2063bc-3a1e-4e9f-badc-299e256a2f3c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.004563 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-kube-api-access-mfrzl" (OuterVolumeSpecName: "kube-api-access-mfrzl") pod "fc2063bc-3a1e-4e9f-badc-299e256a2f3c" (UID: "fc2063bc-3a1e-4e9f-badc-299e256a2f3c"). InnerVolumeSpecName "kube-api-access-mfrzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.004634 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-scripts" (OuterVolumeSpecName: "scripts") pod "fc2063bc-3a1e-4e9f-badc-299e256a2f3c" (UID: "fc2063bc-3a1e-4e9f-badc-299e256a2f3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.050719 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc2063bc-3a1e-4e9f-badc-299e256a2f3c" (UID: "fc2063bc-3a1e-4e9f-badc-299e256a2f3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.084827 4520 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.084853 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.084862 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.084872 4520 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.084882 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfrzl\" (UniqueName: \"kubernetes.io/projected/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-kube-api-access-mfrzl\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.102566 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xgsxk" event={"ID":"fc2063bc-3a1e-4e9f-badc-299e256a2f3c","Type":"ContainerDied","Data":"54132e99aee4f3da29ffe09eb2bd79bfdd3f16b50756229842c09cee4ab334fc"} Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.102606 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54132e99aee4f3da29ffe09eb2bd79bfdd3f16b50756229842c09cee4ab334fc" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.102678 4520 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.114719 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c56fc575-hzw9q" event={"ID":"db846546-7955-4c19-87aa-188602e349e8","Type":"ContainerStarted","Data":"9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df"} Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.114864 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.125613 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" event={"ID":"da8fcfab-9e74-40b5-87a2-a771a93c64e3","Type":"ContainerStarted","Data":"cdf36bce3e9b807b6c60694765274c6958cc0425fbac225031566cdd909c5924"} Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.136568 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-547b6f779b-dz8tp" event={"ID":"8c478fd5-c0c9-4959-8b1f-69b89aa24932","Type":"ContainerStarted","Data":"e326497d93780513dd2d1f5150ade3b1429c7489ef31b50c0a4003263b113f0f"} Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.144766 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-config-data" (OuterVolumeSpecName: "config-data") pod "fc2063bc-3a1e-4e9f-badc-299e256a2f3c" (UID: "fc2063bc-3a1e-4e9f-badc-299e256a2f3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.148699 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" event={"ID":"eed2b222-c964-4e11-914d-e3f45b8b4b02","Type":"ContainerStarted","Data":"7dc6ca7c7261a02fff4736dcf4ea5b5b87e5684922d6fb81e9bbaccbdcc21749"} Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.159059 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68cd6684c9-j8kr8" event={"ID":"7b9415b7-ddcf-40e8-b404-51911e38b5c7","Type":"ContainerStarted","Data":"797f160765d0593f946923e9abec6e629939a4bb74983a63a7566ed0262c5a35"} Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.191654 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2063bc-3a1e-4e9f-badc-299e256a2f3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.197680 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.197726 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.198851 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c56fc575-hzw9q" podStartSLOduration=4.198830694 podStartE2EDuration="4.198830694s" podCreationTimestamp="2026-01-30 07:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:01.156263606 +0000 UTC m=+974.784615786" watchObservedRunningTime="2026-01-30 07:01:01.198830694 +0000 UTC m=+974.827182875" Jan 30 07:01:01 crc kubenswrapper[4520]: I0130 07:01:01.397398 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495941-kdn5b"]
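
[Note] In the "Observed pod startup duration" entry above, no image pull happened (both pull timestamps are the Go zero time 0001-01-01), so podStartE2EDuration equals podStartSLOduration, and both are simply watchObservedRunningTime minus podCreationTimestamp. A quick self-consistency check of the printed numbers for neutron-7c56fc575-hzw9q (Python datetime only carries microseconds, so the nanoseconds from the log are truncated):

    #!/usr/bin/env python3
    # Verify podStartSLOduration=4.198830694 for openstack/neutron-7c56fc575-hzw9q
    # using the timestamps printed in the same log entry.
    from datetime import datetime, timezone

    created = datetime(2026, 1, 30, 7, 0, 57, 0, tzinfo=timezone.utc)       # podCreationTimestamp
    observed = datetime(2026, 1, 30, 7, 1, 1, 198830, tzinfo=timezone.utc)  # watchObservedRunningTime, ns truncated

    print((observed - created).total_seconds())  # -> 4.19883, matching 4.198830694s up to truncation

The same arithmetic accounts for the keystone-cron (2.279759013s) and barbican-api (3.458778071s) durations reported a few entries below.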
source="api" pods=["openstack/keystone-cron-29495941-kdn5b"] Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.192618 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495941-kdn5b" event={"ID":"5aa70617-4bf1-4555-886c-988e24cd5198","Type":"ContainerStarted","Data":"14775b45de2129e581da9cf84e11fec78e0c0154ddc3ea973b4c44124cd49c98"} Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.194200 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495941-kdn5b" event={"ID":"5aa70617-4bf1-4555-886c-988e24cd5198","Type":"ContainerStarted","Data":"5180b31a864360e0c4dc14112c08d79569971e3d921dfbb7b52fa423f7fd1060"} Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.197437 4520 generic.go:334] "Generic (PLEG): container finished" podID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerID="5d2646d1e5c77117451b3b0398e8dae26c360102516e5822630ce22929d349b3" exitCode=0 Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.197565 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" event={"ID":"da8fcfab-9e74-40b5-87a2-a771a93c64e3","Type":"ContainerDied","Data":"5d2646d1e5c77117451b3b0398e8dae26c360102516e5822630ce22929d349b3"} Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.254982 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-547b6f779b-dz8tp" event={"ID":"8c478fd5-c0c9-4959-8b1f-69b89aa24932","Type":"ContainerStarted","Data":"9abf0917b7cbc5c42092d54a6b476db185df50cc70c046577c3acc101542d581"} Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.255020 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-547b6f779b-dz8tp" event={"ID":"8c478fd5-c0c9-4959-8b1f-69b89aa24932","Type":"ContainerStarted","Data":"1281963ebd91f55ec78c917f7564f3630c56884448a25ec9f50e96dbd8a292c5"} Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.255036 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.255057 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.279774 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29495941-kdn5b" podStartSLOduration=2.279759013 podStartE2EDuration="2.279759013s" podCreationTimestamp="2026-01-30 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:02.245763459 +0000 UTC m=+975.874115639" watchObservedRunningTime="2026-01-30 07:01:02.279759013 +0000 UTC m=+975.908111193" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.301232 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cc56s" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="registry-server" probeResult="failure" output=< Jan 30 07:01:02 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:01:02 crc kubenswrapper[4520]: > Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.305736 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:02 crc kubenswrapper[4520]: E0130 07:01:02.306154 4520 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fc2063bc-3a1e-4e9f-badc-299e256a2f3c" containerName="cinder-db-sync" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.306169 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2063bc-3a1e-4e9f-badc-299e256a2f3c" containerName="cinder-db-sync" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.315805 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2063bc-3a1e-4e9f-badc-299e256a2f3c" containerName="cinder-db-sync" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.316967 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.325574 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.358405 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.359525 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fj7s6" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.360777 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.364970 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.458798 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-547b6f779b-dz8tp" podStartSLOduration=3.4587780710000002 podStartE2EDuration="3.458778071s" podCreationTimestamp="2026-01-30 07:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:02.400075926 +0000 UTC m=+976.028428108" watchObservedRunningTime="2026-01-30 07:01:02.458778071 +0000 UTC m=+976.087130241" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.477813 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.477872 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ede99291-73df-453d-80f2-3e4744245bb4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.477920 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrft\" (UniqueName: \"kubernetes.io/projected/ede99291-73df-453d-80f2-3e4744245bb4-kube-api-access-jtrft\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.478011 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.478196 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.478256 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.538550 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bfcf6757f-bv4bw"] Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.576942 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c548f5455-gc5z9"] Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.578421 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.581552 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.581658 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.581681 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ede99291-73df-453d-80f2-3e4744245bb4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.581702 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrft\" (UniqueName: \"kubernetes.io/projected/ede99291-73df-453d-80f2-3e4744245bb4-kube-api-access-jtrft\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.581745 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.581810 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 
30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.581958 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ede99291-73df-453d-80f2-3e4744245bb4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.592250 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c548f5455-gc5z9"] Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.667061 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.667260 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.675637 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.680573 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.684302 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.684390 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.684422 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-config\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.684477 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-svc\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.684538 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " 
pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.684567 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svcg9\" (UniqueName: \"kubernetes.io/projected/885d7c94-3859-4ab4-a1e1-203588ca6f3c-kube-api-access-svcg9\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.684738 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.700338 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.700991 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.708000 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrft\" (UniqueName: \"kubernetes.io/projected/ede99291-73df-453d-80f2-3e4744245bb4-kube-api-access-jtrft\") pod \"cinder-scheduler-0\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.710243 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.788312 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.788762 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.788872 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.789006 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svcg9\" (UniqueName: \"kubernetes.io/projected/885d7c94-3859-4ab4-a1e1-203588ca6f3c-kube-api-access-svcg9\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.789111 4520 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.789202 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e1918d-216f-47bb-abb8-3b9cf0c772e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.789299 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-scripts\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.789398 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.789479 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66e1918d-216f-47bb-abb8-3b9cf0c772e2-logs\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.790120 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.790244 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.790367 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.790825 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-config\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.791556 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrlzz\" (UniqueName: \"kubernetes.io/projected/66e1918d-216f-47bb-abb8-3b9cf0c772e2-kube-api-access-nrlzz\") pod 
\"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.791477 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-config\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.791167 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.791907 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-svc\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.792405 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-svc\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.812963 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svcg9\" (UniqueName: \"kubernetes.io/projected/885d7c94-3859-4ab4-a1e1-203588ca6f3c-kube-api-access-svcg9\") pod \"dnsmasq-dns-6c548f5455-gc5z9\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.894315 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrlzz\" (UniqueName: \"kubernetes.io/projected/66e1918d-216f-47bb-abb8-3b9cf0c772e2-kube-api-access-nrlzz\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.894702 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.894859 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e1918d-216f-47bb-abb8-3b9cf0c772e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.894944 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-scripts\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.895052 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.895144 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66e1918d-216f-47bb-abb8-3b9cf0c772e2-logs\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.895311 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.896616 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e1918d-216f-47bb-abb8-3b9cf0c772e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.896793 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66e1918d-216f-47bb-abb8-3b9cf0c772e2-logs\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.902705 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.903446 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.904567 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.905889 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-scripts\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.918019 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrlzz\" (UniqueName: \"kubernetes.io/projected/66e1918d-216f-47bb-abb8-3b9cf0c772e2-kube-api-access-nrlzz\") pod \"cinder-api-0\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " pod="openstack/cinder-api-0" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.918470 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:02 crc kubenswrapper[4520]: I0130 07:01:02.985133 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 07:01:03 crc kubenswrapper[4520]: I0130 07:01:03.154099 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 07:01:03 crc kubenswrapper[4520]: I0130 07:01:03.284192 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" podUID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerName="dnsmasq-dns" containerID="cri-o://ddff8dda2af1a5263ed64197d1c5d56567de961d5ef5630550396ef43c9ff9eb" gracePeriod=10 Jan 30 07:01:03 crc kubenswrapper[4520]: I0130 07:01:03.284488 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" event={"ID":"da8fcfab-9e74-40b5-87a2-a771a93c64e3","Type":"ContainerStarted","Data":"ddff8dda2af1a5263ed64197d1c5d56567de961d5ef5630550396ef43c9ff9eb"} Jan 30 07:01:03 crc kubenswrapper[4520]: I0130 07:01:03.284968 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:01:03 crc kubenswrapper[4520]: I0130 07:01:03.318443 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" podStartSLOduration=4.318431597 podStartE2EDuration="4.318431597s" podCreationTimestamp="2026-01-30 07:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:03.314897814 +0000 UTC m=+976.943249995" watchObservedRunningTime="2026-01-30 07:01:03.318431597 +0000 UTC m=+976.946783779" Jan 30 07:01:03 crc kubenswrapper[4520]: I0130 07:01:03.470355 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c548f5455-gc5z9"] Jan 30 07:01:03 crc kubenswrapper[4520]: I0130 07:01:03.496092 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:04 crc kubenswrapper[4520]: I0130 07:01:04.170471 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 30 07:01:04 crc kubenswrapper[4520]: I0130 07:01:04.293404 4520 generic.go:334] "Generic (PLEG): container finished" podID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerID="c300fc62e1373c388229a82c0d2f920a528002128dc058a25a8b291ab97f13c0" exitCode=0 Jan 30 07:01:04 crc kubenswrapper[4520]: I0130 07:01:04.293458 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7445dc46fc-s424z" event={"ID":"07aa3f61-cfcb-4aa2-8430-e4f800dbf572","Type":"ContainerDied","Data":"c300fc62e1373c388229a82c0d2f920a528002128dc058a25a8b291ab97f13c0"} Jan 30 07:01:04 crc kubenswrapper[4520]: I0130 07:01:04.298335 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" event={"ID":"885d7c94-3859-4ab4-a1e1-203588ca6f3c","Type":"ContainerStarted","Data":"fb8dcafdc933684e5505b91db15cad05c8816c935ee2bf8a69edd90284d45669"} Jan 30 07:01:04 crc kubenswrapper[4520]: I0130 07:01:04.301996 4520 generic.go:334] "Generic (PLEG): container finished" 
podID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerID="ddff8dda2af1a5263ed64197d1c5d56567de961d5ef5630550396ef43c9ff9eb" exitCode=0 Jan 30 07:01:04 crc kubenswrapper[4520]: I0130 07:01:04.302064 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" event={"ID":"da8fcfab-9e74-40b5-87a2-a771a93c64e3","Type":"ContainerDied","Data":"ddff8dda2af1a5263ed64197d1c5d56567de961d5ef5630550396ef43c9ff9eb"} Jan 30 07:01:04 crc kubenswrapper[4520]: I0130 07:01:04.304079 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ede99291-73df-453d-80f2-3e4744245bb4","Type":"ContainerStarted","Data":"4457fe1b790d671fa38cbbc60033f451df981f81220cb9508553e9764c082fa1"} Jan 30 07:01:04 crc kubenswrapper[4520]: I0130 07:01:04.372867 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d9dd85bbd-2g75n" podUID="bcc0bac1-6294-432a-8703-fbef10b2a44f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 30 07:01:06 crc kubenswrapper[4520]: I0130 07:01:06.018533 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:06 crc kubenswrapper[4520]: I0130 07:01:06.886935 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56c9b4b8d6-x299t"] Jan 30 07:01:06 crc kubenswrapper[4520]: I0130 07:01:06.888365 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:06 crc kubenswrapper[4520]: I0130 07:01:06.890299 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 07:01:06 crc kubenswrapper[4520]: I0130 07:01:06.890566 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 07:01:06 crc kubenswrapper[4520]: I0130 07:01:06.904335 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56c9b4b8d6-x299t"] Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.029328 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-config-data\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.029617 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxlvx\" (UniqueName: \"kubernetes.io/projected/e8e25f39-7521-4108-8a84-55c59c846780-kube-api-access-qxlvx\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.029687 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-combined-ca-bundle\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.029707 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-internal-tls-certs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.029751 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8e25f39-7521-4108-8a84-55c59c846780-logs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.029774 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-config-data-custom\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.029793 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-public-tls-certs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.130503 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-combined-ca-bundle\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.130847 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-internal-tls-certs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.131010 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8e25f39-7521-4108-8a84-55c59c846780-logs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.132270 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-config-data-custom\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.132404 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-public-tls-certs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.132569 4520 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-config-data\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.132685 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxlvx\" (UniqueName: \"kubernetes.io/projected/e8e25f39-7521-4108-8a84-55c59c846780-kube-api-access-qxlvx\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.132292 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8e25f39-7521-4108-8a84-55c59c846780-logs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.151063 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-internal-tls-certs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.152143 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-public-tls-certs\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.152406 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-combined-ca-bundle\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.152973 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-config-data-custom\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.160501 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e25f39-7521-4108-8a84-55c59c846780-config-data\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.168110 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxlvx\" (UniqueName: \"kubernetes.io/projected/e8e25f39-7521-4108-8a84-55c59c846780-kube-api-access-qxlvx\") pod \"barbican-api-56c9b4b8d6-x299t\" (UID: \"e8e25f39-7521-4108-8a84-55c59c846780\") " pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.208923 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.332147 4520 generic.go:334] "Generic (PLEG): container finished" podID="5aa70617-4bf1-4555-886c-988e24cd5198" containerID="14775b45de2129e581da9cf84e11fec78e0c0154ddc3ea973b4c44124cd49c98" exitCode=0 Jan 30 07:01:07 crc kubenswrapper[4520]: I0130 07:01:07.332194 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495941-kdn5b" event={"ID":"5aa70617-4bf1-4555-886c-988e24cd5198","Type":"ContainerDied","Data":"14775b45de2129e581da9cf84e11fec78e0c0154ddc3ea973b4c44124cd49c98"} Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.781352 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.787345 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.789567 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905000 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh7nd\" (UniqueName: \"kubernetes.io/projected/da8fcfab-9e74-40b5-87a2-a771a93c64e3-kube-api-access-hh7nd\") pod \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905349 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-swift-storage-0\") pod \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905379 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n54zb\" (UniqueName: \"kubernetes.io/projected/5aa70617-4bf1-4555-886c-988e24cd5198-kube-api-access-n54zb\") pod \"5aa70617-4bf1-4555-886c-988e24cd5198\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905425 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-nb\") pod \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905447 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-sb\") pod \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905543 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-svc\") pod \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905601 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-ovndb-tls-certs\") pod \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905620 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-config-data\") pod \"5aa70617-4bf1-4555-886c-988e24cd5198\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905684 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-fernet-keys\") pod \"5aa70617-4bf1-4555-886c-988e24cd5198\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905767 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-combined-ca-bundle\") pod \"5aa70617-4bf1-4555-886c-988e24cd5198\" (UID: \"5aa70617-4bf1-4555-886c-988e24cd5198\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.905973 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-public-tls-certs\") pod \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.906008 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlthh\" (UniqueName: \"kubernetes.io/projected/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-kube-api-access-tlthh\") pod \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.906777 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-combined-ca-bundle\") pod \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.906838 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-config\") pod \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\" (UID: \"da8fcfab-9e74-40b5-87a2-a771a93c64e3\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.906886 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-config\") pod \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.907118 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-internal-tls-certs\") pod \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.907251 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-httpd-config\") pod \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\" (UID: \"07aa3f61-cfcb-4aa2-8430-e4f800dbf572\") " Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.926391 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa70617-4bf1-4555-886c-988e24cd5198-kube-api-access-n54zb" (OuterVolumeSpecName: "kube-api-access-n54zb") pod "5aa70617-4bf1-4555-886c-988e24cd5198" (UID: "5aa70617-4bf1-4555-886c-988e24cd5198"). InnerVolumeSpecName "kube-api-access-n54zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.952440 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "07aa3f61-cfcb-4aa2-8430-e4f800dbf572" (UID: "07aa3f61-cfcb-4aa2-8430-e4f800dbf572"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.969394 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5aa70617-4bf1-4555-886c-988e24cd5198" (UID: "5aa70617-4bf1-4555-886c-988e24cd5198"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.969733 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8fcfab-9e74-40b5-87a2-a771a93c64e3-kube-api-access-hh7nd" (OuterVolumeSpecName: "kube-api-access-hh7nd") pod "da8fcfab-9e74-40b5-87a2-a771a93c64e3" (UID: "da8fcfab-9e74-40b5-87a2-a771a93c64e3"). InnerVolumeSpecName "kube-api-access-hh7nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:09 crc kubenswrapper[4520]: I0130 07:01:09.970044 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-kube-api-access-tlthh" (OuterVolumeSpecName: "kube-api-access-tlthh") pod "07aa3f61-cfcb-4aa2-8430-e4f800dbf572" (UID: "07aa3f61-cfcb-4aa2-8430-e4f800dbf572"). InnerVolumeSpecName "kube-api-access-tlthh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.010936 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh7nd\" (UniqueName: \"kubernetes.io/projected/da8fcfab-9e74-40b5-87a2-a771a93c64e3-kube-api-access-hh7nd\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.010972 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n54zb\" (UniqueName: \"kubernetes.io/projected/5aa70617-4bf1-4555-886c-988e24cd5198-kube-api-access-n54zb\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.010983 4520 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.010997 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlthh\" (UniqueName: \"kubernetes.io/projected/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-kube-api-access-tlthh\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.011009 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.047143 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5aa70617-4bf1-4555-886c-988e24cd5198" (UID: "5aa70617-4bf1-4555-886c-988e24cd5198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.074615 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-config" (OuterVolumeSpecName: "config") pod "da8fcfab-9e74-40b5-87a2-a771a93c64e3" (UID: "da8fcfab-9e74-40b5-87a2-a771a93c64e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.098508 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "07aa3f61-cfcb-4aa2-8430-e4f800dbf572" (UID: "07aa3f61-cfcb-4aa2-8430-e4f800dbf572"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.103503 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-config" (OuterVolumeSpecName: "config") pod "07aa3f61-cfcb-4aa2-8430-e4f800dbf572" (UID: "07aa3f61-cfcb-4aa2-8430-e4f800dbf572"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.116951 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.116975 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.116988 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.117008 4520 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.117159 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07aa3f61-cfcb-4aa2-8430-e4f800dbf572" (UID: "07aa3f61-cfcb-4aa2-8430-e4f800dbf572"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.128575 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07aa3f61-cfcb-4aa2-8430-e4f800dbf572" (UID: "07aa3f61-cfcb-4aa2-8430-e4f800dbf572"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.173527 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da8fcfab-9e74-40b5-87a2-a771a93c64e3" (UID: "da8fcfab-9e74-40b5-87a2-a771a93c64e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.176393 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da8fcfab-9e74-40b5-87a2-a771a93c64e3" (UID: "da8fcfab-9e74-40b5-87a2-a771a93c64e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.195904 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da8fcfab-9e74-40b5-87a2-a771a93c64e3" (UID: "da8fcfab-9e74-40b5-87a2-a771a93c64e3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.201119 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-config-data" (OuterVolumeSpecName: "config-data") pod "5aa70617-4bf1-4555-886c-988e24cd5198" (UID: "5aa70617-4bf1-4555-886c-988e24cd5198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.201151 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da8fcfab-9e74-40b5-87a2-a771a93c64e3" (UID: "da8fcfab-9e74-40b5-87a2-a771a93c64e3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.217595 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "07aa3f61-cfcb-4aa2-8430-e4f800dbf572" (UID: "07aa3f61-cfcb-4aa2-8430-e4f800dbf572"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.219026 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.219053 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.219064 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.219074 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.219084 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da8fcfab-9e74-40b5-87a2-a771a93c64e3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.219092 4520 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.219101 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa70617-4bf1-4555-886c-988e24cd5198-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.219108 4520 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07aa3f61-cfcb-4aa2-8430-e4f800dbf572-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:10 crc 
kubenswrapper[4520]: I0130 07:01:10.370995 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7445dc46fc-s424z" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.372697 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7445dc46fc-s424z" event={"ID":"07aa3f61-cfcb-4aa2-8430-e4f800dbf572","Type":"ContainerDied","Data":"8b6d62a427a1144ed02fa2512156c0c6e62e866ada8f94b894bb531ee62d86f4"} Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.372759 4520 scope.go:117] "RemoveContainer" containerID="be31646e606daa8921125c772c609b179e4fdced55dbbd3d1d7da3abaff7801a" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.384127 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" event={"ID":"da8fcfab-9e74-40b5-87a2-a771a93c64e3","Type":"ContainerDied","Data":"cdf36bce3e9b807b6c60694765274c6958cc0425fbac225031566cdd909c5924"} Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.384229 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.387314 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495941-kdn5b" event={"ID":"5aa70617-4bf1-4555-886c-988e24cd5198","Type":"ContainerDied","Data":"5180b31a864360e0c4dc14112c08d79569971e3d921dfbb7b52fa423f7fd1060"} Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.387361 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5180b31a864360e0c4dc14112c08d79569971e3d921dfbb7b52fa423f7fd1060" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.389216 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495941-kdn5b" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.416430 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7445dc46fc-s424z"] Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.444179 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7445dc46fc-s424z"] Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.456261 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bfcf6757f-bv4bw"] Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.485027 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bfcf6757f-bv4bw"] Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.698096 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" path="/var/lib/kubelet/pods/07aa3f61-cfcb-4aa2-8430-e4f800dbf572/volumes" Jan 30 07:01:10 crc kubenswrapper[4520]: I0130 07:01:10.699065 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" path="/var/lib/kubelet/pods/da8fcfab-9e74-40b5-87a2-a771a93c64e3/volumes" Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.232649 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.247728 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.293202 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.405570 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.463409 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.474389 4520 scope.go:117] "RemoveContainer" containerID="c300fc62e1373c388229a82c0d2f920a528002128dc058a25a8b291ab97f13c0" Jan 30 07:01:11 crc kubenswrapper[4520]: W0130 07:01:11.478837 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66e1918d_216f_47bb_abb8_3b9cf0c772e2.slice/crio-805adb61b23fbeb439c8bc7603b7f3f8ada9aa21161b7b4deda466551590288a WatchSource:0}: Error finding container 805adb61b23fbeb439c8bc7603b7f3f8ada9aa21161b7b4deda466551590288a: Status 404 returned error can't find the container with id 805adb61b23fbeb439c8bc7603b7f3f8ada9aa21161b7b4deda466551590288a Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.505404 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cc56s"] Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.809824 4520 scope.go:117] "RemoveContainer" containerID="ddff8dda2af1a5263ed64197d1c5d56567de961d5ef5630550396ef43c9ff9eb" Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.947147 4520 scope.go:117] "RemoveContainer" containerID="5d2646d1e5c77117451b3b0398e8dae26c360102516e5822630ce22929d349b3" Jan 30 07:01:11 crc kubenswrapper[4520]: I0130 07:01:11.996459 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56c9b4b8d6-x299t"] Jan 30 07:01:12 crc 
kubenswrapper[4520]: I0130 07:01:12.437623 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c9b4b8d6-x299t" event={"ID":"e8e25f39-7521-4108-8a84-55c59c846780","Type":"ContainerStarted","Data":"ffb41d8dd2bb9d9e7e86eb8b2c3437edecbb8d6b2fd126db691f51bba611af53"} Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.437883 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c9b4b8d6-x299t" event={"ID":"e8e25f39-7521-4108-8a84-55c59c846780","Type":"ContainerStarted","Data":"f4f9a903a9c35ae654c4298315f4f4bbc504f11d7c9a5898602edc8658a7416a"} Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.443205 4520 generic.go:334] "Generic (PLEG): container finished" podID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" containerID="dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763" exitCode=0 Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.443315 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" event={"ID":"885d7c94-3859-4ab4-a1e1-203588ca6f3c","Type":"ContainerDied","Data":"dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763"} Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.449004 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"66e1918d-216f-47bb-abb8-3b9cf0c772e2","Type":"ContainerStarted","Data":"805adb61b23fbeb439c8bc7603b7f3f8ada9aa21161b7b4deda466551590288a"} Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.478670 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerStarted","Data":"ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c"} Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.478885 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="ceilometer-central-agent" containerID="cri-o://21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f" gracePeriod=30 Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.479295 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="proxy-httpd" containerID="cri-o://ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c" gracePeriod=30 Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.479352 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="sg-core" containerID="cri-o://b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8" gracePeriod=30 Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.479426 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="ceilometer-notification-agent" containerID="cri-o://047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03" gracePeriod=30 Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.479492 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.492338 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" 
event={"ID":"eed2b222-c964-4e11-914d-e3f45b8b4b02","Type":"ContainerStarted","Data":"612cf79f3f61370a34e35ea8fad9edbad073ecd928fcee3c29ff827d206bc700"} Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.504598 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68cd6684c9-j8kr8" event={"ID":"7b9415b7-ddcf-40e8-b404-51911e38b5c7","Type":"ContainerStarted","Data":"87ce8b1cb0e3ebec05c65105bd2fbca567009645dca3aa19b200a0d835e19045"} Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.504875 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cc56s" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="registry-server" containerID="cri-o://1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd" gracePeriod=2 Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.525704 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.542259018 podStartE2EDuration="1m18.525683395s" podCreationTimestamp="2026-01-30 06:59:54 +0000 UTC" firstStartedPulling="2026-01-30 06:59:56.860206544 +0000 UTC m=+910.488558724" lastFinishedPulling="2026-01-30 07:01:11.84363092 +0000 UTC m=+985.471983101" observedRunningTime="2026-01-30 07:01:12.502707819 +0000 UTC m=+986.131060000" watchObservedRunningTime="2026-01-30 07:01:12.525683395 +0000 UTC m=+986.154035576" Jan 30 07:01:12 crc kubenswrapper[4520]: I0130 07:01:12.526846 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68cd6684c9-j8kr8" podStartSLOduration=2.918592645 podStartE2EDuration="13.526836613s" podCreationTimestamp="2026-01-30 07:00:59 +0000 UTC" firstStartedPulling="2026-01-30 07:01:00.172894625 +0000 UTC m=+973.801246806" lastFinishedPulling="2026-01-30 07:01:10.781138592 +0000 UTC m=+984.409490774" observedRunningTime="2026-01-30 07:01:12.521835941 +0000 UTC m=+986.150188122" watchObservedRunningTime="2026-01-30 07:01:12.526836613 +0000 UTC m=+986.155188793" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.217774 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.306649 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-utilities\") pod \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.306779 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-catalog-content\") pod \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.306807 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjk65\" (UniqueName: \"kubernetes.io/projected/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-kube-api-access-wjk65\") pod \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\" (UID: \"be78f3d6-9a68-4858-8d5b-a2fe0ea03050\") " Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.313398 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-utilities" (OuterVolumeSpecName: "utilities") pod "be78f3d6-9a68-4858-8d5b-a2fe0ea03050" (UID: "be78f3d6-9a68-4858-8d5b-a2fe0ea03050"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.313662 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-kube-api-access-wjk65" (OuterVolumeSpecName: "kube-api-access-wjk65") pod "be78f3d6-9a68-4858-8d5b-a2fe0ea03050" (UID: "be78f3d6-9a68-4858-8d5b-a2fe0ea03050"). InnerVolumeSpecName "kube-api-access-wjk65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.337757 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be78f3d6-9a68-4858-8d5b-a2fe0ea03050" (UID: "be78f3d6-9a68-4858-8d5b-a2fe0ea03050"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.410188 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.410220 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjk65\" (UniqueName: \"kubernetes.io/projected/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-kube-api-access-wjk65\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.410233 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be78f3d6-9a68-4858-8d5b-a2fe0ea03050-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.528737 4520 generic.go:334] "Generic (PLEG): container finished" podID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerID="1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd" exitCode=0 Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.528852 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cc56s" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.528917 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cc56s" event={"ID":"be78f3d6-9a68-4858-8d5b-a2fe0ea03050","Type":"ContainerDied","Data":"1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.528972 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cc56s" event={"ID":"be78f3d6-9a68-4858-8d5b-a2fe0ea03050","Type":"ContainerDied","Data":"33fd165f00d9aa97214867245e229de662e05219347dd1233da9579df9e8c08b"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.529000 4520 scope.go:117] "RemoveContainer" containerID="1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.541267 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"66e1918d-216f-47bb-abb8-3b9cf0c772e2","Type":"ContainerStarted","Data":"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.549253 4520 generic.go:334] "Generic (PLEG): container finished" podID="4efe190c-047a-4463-9044-515816c2a7e1" containerID="ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c" exitCode=0 Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.549293 4520 generic.go:334] "Generic (PLEG): container finished" podID="4efe190c-047a-4463-9044-515816c2a7e1" containerID="b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8" exitCode=2 Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.549303 4520 generic.go:334] "Generic (PLEG): container finished" podID="4efe190c-047a-4463-9044-515816c2a7e1" containerID="21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f" exitCode=0 Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.549367 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerDied","Data":"ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 
07:01:13.549420 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerDied","Data":"b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.549437 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerDied","Data":"21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.565700 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" event={"ID":"eed2b222-c964-4e11-914d-e3f45b8b4b02","Type":"ContainerStarted","Data":"a08c28e844082ebb8fba9ca6131569d661a08d44e501fe6329e00bb869f99635"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.568719 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cc56s"] Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.574426 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cc56s"] Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.577015 4520 scope.go:117] "RemoveContainer" containerID="a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.581870 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68cd6684c9-j8kr8" event={"ID":"7b9415b7-ddcf-40e8-b404-51911e38b5c7","Type":"ContainerStarted","Data":"d29864e8ba424d1285e1d940399e292ed8278241e1384de8cbe61a4330e46c6a"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.589587 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c9b4b8d6-x299t" event={"ID":"e8e25f39-7521-4108-8a84-55c59c846780","Type":"ContainerStarted","Data":"4277dd1edf7e233215ec3480c14d30b65f0ad564000c2963df62fb3d3d753932"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.590333 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.590356 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.592144 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" event={"ID":"885d7c94-3859-4ab4-a1e1-203588ca6f3c","Type":"ContainerStarted","Data":"398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.592565 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.595219 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c5d4f857d-ww6k4" podStartSLOduration=3.732196585 podStartE2EDuration="14.595202894s" podCreationTimestamp="2026-01-30 07:00:59 +0000 UTC" firstStartedPulling="2026-01-30 07:01:00.588791719 +0000 UTC m=+974.217143900" lastFinishedPulling="2026-01-30 07:01:11.451798028 +0000 UTC m=+985.080150209" observedRunningTime="2026-01-30 07:01:13.587209816 +0000 UTC m=+987.215561997" watchObservedRunningTime="2026-01-30 07:01:13.595202894 +0000 UTC m=+987.223555075" Jan 30 07:01:13 crc 
kubenswrapper[4520]: I0130 07:01:13.601902 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ede99291-73df-453d-80f2-3e4744245bb4","Type":"ContainerStarted","Data":"55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766"} Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.617868 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56c9b4b8d6-x299t" podStartSLOduration=7.617857295 podStartE2EDuration="7.617857295s" podCreationTimestamp="2026-01-30 07:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:13.616555609 +0000 UTC m=+987.244907790" watchObservedRunningTime="2026-01-30 07:01:13.617857295 +0000 UTC m=+987.246209477" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.647391 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" podStartSLOduration=11.647376613 podStartE2EDuration="11.647376613s" podCreationTimestamp="2026-01-30 07:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:13.636576319 +0000 UTC m=+987.264928500" watchObservedRunningTime="2026-01-30 07:01:13.647376613 +0000 UTC m=+987.275728785" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.691679 4520 scope.go:117] "RemoveContainer" containerID="991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.784136 4520 scope.go:117] "RemoveContainer" containerID="1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd" Jan 30 07:01:13 crc kubenswrapper[4520]: E0130 07:01:13.784811 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd\": container with ID starting with 1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd not found: ID does not exist" containerID="1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.784858 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd"} err="failed to get container status \"1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd\": rpc error: code = NotFound desc = could not find container \"1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd\": container with ID starting with 1a52461fb40eaed91a79eed483675c178cee92381980d36d28f755f46ff0fcfd not found: ID does not exist" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.784877 4520 scope.go:117] "RemoveContainer" containerID="a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b" Jan 30 07:01:13 crc kubenswrapper[4520]: E0130 07:01:13.788776 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b\": container with ID starting with a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b not found: ID does not exist" containerID="a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.788840 4520 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b"} err="failed to get container status \"a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b\": rpc error: code = NotFound desc = could not find container \"a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b\": container with ID starting with a673ae0a5fe2790bd626810aa1b095aa245cb369936ccdf6f5202720cc35ac8b not found: ID does not exist" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.788874 4520 scope.go:117] "RemoveContainer" containerID="991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c" Jan 30 07:01:13 crc kubenswrapper[4520]: E0130 07:01:13.804785 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c\": container with ID starting with 991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c not found: ID does not exist" containerID="991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c" Jan 30 07:01:13 crc kubenswrapper[4520]: I0130 07:01:13.804842 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c"} err="failed to get container status \"991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c\": rpc error: code = NotFound desc = could not find container \"991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c\": container with ID starting with 991ddf1474b1df11e2420824b7776f4f9d84b9f4a5605891a4f6b57d0f46f85c not found: ID does not exist" Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.615112 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"66e1918d-216f-47bb-abb8-3b9cf0c772e2","Type":"ContainerStarted","Data":"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde"} Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.615655 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerName="cinder-api-log" containerID="cri-o://db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255" gracePeriod=30 Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.615782 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerName="cinder-api" containerID="cri-o://fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde" gracePeriod=30 Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.616072 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.619575 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ede99291-73df-453d-80f2-3e4744245bb4","Type":"ContainerStarted","Data":"7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442"} Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.643117 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=12.643087879 podStartE2EDuration="12.643087879s" podCreationTimestamp="2026-01-30 07:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:14.634635046 +0000 UTC m=+988.262987218" watchObservedRunningTime="2026-01-30 07:01:14.643087879 +0000 UTC m=+988.271440060" Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.683990 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.718906932 podStartE2EDuration="12.683968056s" podCreationTimestamp="2026-01-30 07:01:02 +0000 UTC" firstStartedPulling="2026-01-30 07:01:03.857125298 +0000 UTC m=+977.485477479" lastFinishedPulling="2026-01-30 07:01:11.822186423 +0000 UTC m=+985.450538603" observedRunningTime="2026-01-30 07:01:14.662042995 +0000 UTC m=+988.290395166" watchObservedRunningTime="2026-01-30 07:01:14.683968056 +0000 UTC m=+988.312320257" Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.699172 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bfcf6757f-bv4bw" podUID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Jan 30 07:01:14 crc kubenswrapper[4520]: I0130 07:01:14.702721 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" path="/var/lib/kubelet/pods/be78f3d6-9a68-4858-8d5b-a2fe0ea03050/volumes" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.428332 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581278 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrlzz\" (UniqueName: \"kubernetes.io/projected/66e1918d-216f-47bb-abb8-3b9cf0c772e2-kube-api-access-nrlzz\") pod \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581354 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data-custom\") pod \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581439 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-scripts\") pod \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581475 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e1918d-216f-47bb-abb8-3b9cf0c772e2-etc-machine-id\") pod \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581542 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66e1918d-216f-47bb-abb8-3b9cf0c772e2-logs\") pod \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581561 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-combined-ca-bundle\") pod \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581668 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data\") pod \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\" (UID: \"66e1918d-216f-47bb-abb8-3b9cf0c772e2\") " Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581668 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66e1918d-216f-47bb-abb8-3b9cf0c772e2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "66e1918d-216f-47bb-abb8-3b9cf0c772e2" (UID: "66e1918d-216f-47bb-abb8-3b9cf0c772e2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.581913 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66e1918d-216f-47bb-abb8-3b9cf0c772e2-logs" (OuterVolumeSpecName: "logs") pod "66e1918d-216f-47bb-abb8-3b9cf0c772e2" (UID: "66e1918d-216f-47bb-abb8-3b9cf0c772e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.582244 4520 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e1918d-216f-47bb-abb8-3b9cf0c772e2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.582258 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66e1918d-216f-47bb-abb8-3b9cf0c772e2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.592684 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-scripts" (OuterVolumeSpecName: "scripts") pod "66e1918d-216f-47bb-abb8-3b9cf0c772e2" (UID: "66e1918d-216f-47bb-abb8-3b9cf0c772e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.594123 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e1918d-216f-47bb-abb8-3b9cf0c772e2-kube-api-access-nrlzz" (OuterVolumeSpecName: "kube-api-access-nrlzz") pod "66e1918d-216f-47bb-abb8-3b9cf0c772e2" (UID: "66e1918d-216f-47bb-abb8-3b9cf0c772e2"). InnerVolumeSpecName "kube-api-access-nrlzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.605962 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "66e1918d-216f-47bb-abb8-3b9cf0c772e2" (UID: "66e1918d-216f-47bb-abb8-3b9cf0c772e2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.617803 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66e1918d-216f-47bb-abb8-3b9cf0c772e2" (UID: "66e1918d-216f-47bb-abb8-3b9cf0c772e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.642384 4520 generic.go:334] "Generic (PLEG): container finished" podID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerID="fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde" exitCode=0 Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.642421 4520 generic.go:334] "Generic (PLEG): container finished" podID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerID="db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255" exitCode=143 Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.643461 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"66e1918d-216f-47bb-abb8-3b9cf0c772e2","Type":"ContainerDied","Data":"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde"} Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.643540 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"66e1918d-216f-47bb-abb8-3b9cf0c772e2","Type":"ContainerDied","Data":"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255"} Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.643555 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"66e1918d-216f-47bb-abb8-3b9cf0c772e2","Type":"ContainerDied","Data":"805adb61b23fbeb439c8bc7603b7f3f8ada9aa21161b7b4deda466551590288a"} Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.643573 4520 scope.go:117] "RemoveContainer" containerID="fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.643604 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.645616 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data" (OuterVolumeSpecName: "config-data") pod "66e1918d-216f-47bb-abb8-3b9cf0c772e2" (UID: "66e1918d-216f-47bb-abb8-3b9cf0c772e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.689754 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.689786 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrlzz\" (UniqueName: \"kubernetes.io/projected/66e1918d-216f-47bb-abb8-3b9cf0c772e2-kube-api-access-nrlzz\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.689799 4520 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.689810 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.689818 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e1918d-216f-47bb-abb8-3b9cf0c772e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.691668 4520 scope.go:117] "RemoveContainer" containerID="db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.732692 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.741790 4520 scope.go:117] "RemoveContainer" containerID="fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde" Jan 30 07:01:15 crc kubenswrapper[4520]: E0130 07:01:15.742706 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde\": container with ID starting with fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde not found: ID does not exist" containerID="fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.742781 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde"} err="failed to get container status \"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde\": rpc error: code = NotFound desc = could not find container \"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde\": container with ID starting with fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde not found: ID does not exist" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.742854 4520 scope.go:117] "RemoveContainer" containerID="db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255" Jan 30 07:01:15 crc kubenswrapper[4520]: E0130 07:01:15.745547 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255\": container with ID starting with db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255 not found: ID does not exist" 
containerID="db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.745682 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255"} err="failed to get container status \"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255\": rpc error: code = NotFound desc = could not find container \"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255\": container with ID starting with db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255 not found: ID does not exist" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.745747 4520 scope.go:117] "RemoveContainer" containerID="fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.748674 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde"} err="failed to get container status \"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde\": rpc error: code = NotFound desc = could not find container \"fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde\": container with ID starting with fa9b4deef9400476b18ac76cf6c8c95d5063fa3d1dd27dc118a58f4005905cde not found: ID does not exist" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.748811 4520 scope.go:117] "RemoveContainer" containerID="db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.749413 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255"} err="failed to get container status \"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255\": rpc error: code = NotFound desc = could not find container \"db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255\": container with ID starting with db74ba2f32a6eabaf757076a0f317b4b0f800c1d61676aae0cd230757650d255 not found: ID does not exist" Jan 30 07:01:15 crc kubenswrapper[4520]: I0130 07:01:15.765318 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.029563 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.044038 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064078 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d85f5b788-9fjcm"] Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064579 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-httpd" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064599 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-httpd" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064612 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerName="cinder-api" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064618 4520 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerName="cinder-api" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064627 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerName="dnsmasq-dns" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064634 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerName="dnsmasq-dns" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064647 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa70617-4bf1-4555-886c-988e24cd5198" containerName="keystone-cron" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064653 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa70617-4bf1-4555-886c-988e24cd5198" containerName="keystone-cron" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064662 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerName="cinder-api-log" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064669 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerName="cinder-api-log" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064680 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerName="init" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064685 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerName="init" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064697 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="extract-utilities" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064702 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="extract-utilities" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064716 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-api" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064721 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-api" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064728 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="registry-server" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064735 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="registry-server" Jan 30 07:01:16 crc kubenswrapper[4520]: E0130 07:01:16.064746 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="extract-content" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064751 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="extract-content" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064913 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-httpd" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064926 4520 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5aa70617-4bf1-4555-886c-988e24cd5198" containerName="keystone-cron" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064934 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="be78f3d6-9a68-4858-8d5b-a2fe0ea03050" containerName="registry-server" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064950 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8fcfab-9e74-40b5-87a2-a771a93c64e3" containerName="dnsmasq-dns" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064959 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="07aa3f61-cfcb-4aa2-8430-e4f800dbf572" containerName="neutron-api" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064967 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerName="cinder-api" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.064973 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" containerName="cinder-api-log" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.065917 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.074544 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.076634 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.089097 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.089307 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.089447 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.107282 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d85f5b788-9fjcm"] Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.118469 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.236657 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.236834 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5b8m\" (UniqueName: \"kubernetes.io/projected/e439f3dd-60bf-4740-b282-05179f982029-kube-api-access-k5b8m\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.236926 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9k87\" (UniqueName: \"kubernetes.io/projected/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-kube-api-access-v9k87\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " 
pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237033 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-combined-ca-bundle\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237125 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-config-data\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237217 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-config-data\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237313 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e439f3dd-60bf-4740-b282-05179f982029-logs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237381 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-public-tls-certs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237447 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-scripts\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237629 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237701 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-config-data-custom\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237762 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-internal-tls-certs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 
07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237786 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237833 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237865 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-scripts\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.237891 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-logs\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339398 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-config-data-custom\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339452 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-internal-tls-certs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339471 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339501 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339543 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-scripts\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339564 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-logs\") pod \"cinder-api-0\" (UID: 
\"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339615 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339640 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5b8m\" (UniqueName: \"kubernetes.io/projected/e439f3dd-60bf-4740-b282-05179f982029-kube-api-access-k5b8m\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339662 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9k87\" (UniqueName: \"kubernetes.io/projected/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-kube-api-access-v9k87\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339688 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-combined-ca-bundle\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339711 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-config-data\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339729 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-config-data\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339752 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e439f3dd-60bf-4740-b282-05179f982029-logs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339765 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-public-tls-certs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339781 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-scripts\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339798 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.339865 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.351171 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-logs\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.352474 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-config-data\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.353404 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-config-data-custom\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.353473 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-combined-ca-bundle\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.356859 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e439f3dd-60bf-4740-b282-05179f982029-logs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.357325 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-scripts\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.364086 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.366079 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.366760 4520 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-config-data\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.370981 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-public-tls-certs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.375855 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-scripts\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.376350 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.376898 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e439f3dd-60bf-4740-b282-05179f982029-internal-tls-certs\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.388129 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5b8m\" (UniqueName: \"kubernetes.io/projected/e439f3dd-60bf-4740-b282-05179f982029-kube-api-access-k5b8m\") pod \"placement-7d85f5b788-9fjcm\" (UID: \"e439f3dd-60bf-4740-b282-05179f982029\") " pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.402525 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.403037 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9k87\" (UniqueName: \"kubernetes.io/projected/34ccd84a-cc7c-4722-9873-dc7d2c816c0d-kube-api-access-v9k87\") pod \"cinder-api-0\" (UID: \"34ccd84a-cc7c-4722-9873-dc7d2c816c0d\") " pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.407640 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 07:01:16 crc kubenswrapper[4520]: I0130 07:01:16.702015 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e1918d-216f-47bb-abb8-3b9cf0c772e2" path="/var/lib/kubelet/pods/66e1918d-216f-47bb-abb8-3b9cf0c772e2/volumes" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.221092 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d85f5b788-9fjcm"] Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.334734 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.571338 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c459697cb-g922m" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.604422 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.678242 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"34ccd84a-cc7c-4722-9873-dc7d2c816c0d","Type":"ContainerStarted","Data":"703a57d71b0ecde4298f3d87ec13073a183851f419f4d261bd4056ca0559cc35"} Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.685086 4520 generic.go:334] "Generic (PLEG): container finished" podID="4efe190c-047a-4463-9044-515816c2a7e1" containerID="047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03" exitCode=0 Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.685155 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerDied","Data":"047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03"} Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.685187 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4efe190c-047a-4463-9044-515816c2a7e1","Type":"ContainerDied","Data":"962e08117bef72825961dd4e0f0e2d8765d7ac5606e348815111c653963b0c4f"} Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.685204 4520 scope.go:117] "RemoveContainer" containerID="ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.685377 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.696409 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d85f5b788-9fjcm" event={"ID":"e439f3dd-60bf-4740-b282-05179f982029","Type":"ContainerStarted","Data":"d43c4757e4655bc489e4fef06246dcdee0b78e7f3ee338645589fe6fbc789ef6"} Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.696564 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d85f5b788-9fjcm" event={"ID":"e439f3dd-60bf-4740-b282-05179f982029","Type":"ContainerStarted","Data":"24ce61d7ce3628952023a5d0fbda2e8ede23c6632c014ab5d1b0d9d223947fa2"} Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.716754 4520 scope.go:117] "RemoveContainer" containerID="b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.737813 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.774259 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-sg-core-conf-yaml\") pod \"4efe190c-047a-4463-9044-515816c2a7e1\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.774314 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-scripts\") pod \"4efe190c-047a-4463-9044-515816c2a7e1\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.774391 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-log-httpd\") pod \"4efe190c-047a-4463-9044-515816c2a7e1\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.774425 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-run-httpd\") pod \"4efe190c-047a-4463-9044-515816c2a7e1\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.774832 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4efe190c-047a-4463-9044-515816c2a7e1" (UID: "4efe190c-047a-4463-9044-515816c2a7e1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.775073 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-combined-ca-bundle\") pod \"4efe190c-047a-4463-9044-515816c2a7e1\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.775131 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2ts2\" (UniqueName: \"kubernetes.io/projected/4efe190c-047a-4463-9044-515816c2a7e1-kube-api-access-s2ts2\") pod \"4efe190c-047a-4463-9044-515816c2a7e1\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.775174 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-config-data\") pod \"4efe190c-047a-4463-9044-515816c2a7e1\" (UID: \"4efe190c-047a-4463-9044-515816c2a7e1\") " Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.775578 4520 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.781579 4520 scope.go:117] "RemoveContainer" containerID="047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.782397 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4efe190c-047a-4463-9044-515816c2a7e1" (UID: "4efe190c-047a-4463-9044-515816c2a7e1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.797763 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efe190c-047a-4463-9044-515816c2a7e1-kube-api-access-s2ts2" (OuterVolumeSpecName: "kube-api-access-s2ts2") pod "4efe190c-047a-4463-9044-515816c2a7e1" (UID: "4efe190c-047a-4463-9044-515816c2a7e1"). InnerVolumeSpecName "kube-api-access-s2ts2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.821822 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-scripts" (OuterVolumeSpecName: "scripts") pod "4efe190c-047a-4463-9044-515816c2a7e1" (UID: "4efe190c-047a-4463-9044-515816c2a7e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.837396 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4efe190c-047a-4463-9044-515816c2a7e1" (UID: "4efe190c-047a-4463-9044-515816c2a7e1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.867683 4520 scope.go:117] "RemoveContainer" containerID="21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.879343 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2ts2\" (UniqueName: \"kubernetes.io/projected/4efe190c-047a-4463-9044-515816c2a7e1-kube-api-access-s2ts2\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.879385 4520 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.879396 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.879406 4520 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4efe190c-047a-4463-9044-515816c2a7e1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.916699 4520 scope.go:117] "RemoveContainer" containerID="ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c" Jan 30 07:01:17 crc kubenswrapper[4520]: E0130 07:01:17.917613 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c\": container with ID starting with ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c not found: ID does not exist" containerID="ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.917650 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c"} err="failed to get container status \"ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c\": rpc error: code = NotFound desc = could not find container \"ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c\": container with ID starting with ab7c08275975f588088eb7f95ddb84450d1703b061e2d35dedc379c7e583433c not found: ID does not exist" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.917677 4520 scope.go:117] "RemoveContainer" containerID="b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8" Jan 30 07:01:17 crc kubenswrapper[4520]: E0130 07:01:17.918934 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8\": container with ID starting with b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8 not found: ID does not exist" containerID="b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.919029 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8"} err="failed to get container status \"b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8\": rpc error: code = NotFound desc = could 
not find container \"b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8\": container with ID starting with b126a1dcbdaa0eaa43f16ac1da4cb06c30fc0fc7e894f73eec11a72c209753e8 not found: ID does not exist" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.919107 4520 scope.go:117] "RemoveContainer" containerID="047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03" Jan 30 07:01:17 crc kubenswrapper[4520]: E0130 07:01:17.919812 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03\": container with ID starting with 047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03 not found: ID does not exist" containerID="047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.919845 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03"} err="failed to get container status \"047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03\": rpc error: code = NotFound desc = could not find container \"047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03\": container with ID starting with 047e969204cfcb2f6398ccfc5932c7060b7a8b55c1a6c94163906219a8c6de03 not found: ID does not exist" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.919871 4520 scope.go:117] "RemoveContainer" containerID="21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f" Jan 30 07:01:17 crc kubenswrapper[4520]: E0130 07:01:17.920194 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f\": container with ID starting with 21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f not found: ID does not exist" containerID="21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.920227 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f"} err="failed to get container status \"21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f\": rpc error: code = NotFound desc = could not find container \"21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f\": container with ID starting with 21536f66b139a50e5fd8cfe52814b014f0f4fa2d3f0e61b68d8c97bd5b1ea26f not found: ID does not exist" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.924742 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.968683 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4efe190c-047a-4463-9044-515816c2a7e1" (UID: "4efe190c-047a-4463-9044-515816c2a7e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.984357 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:17 crc kubenswrapper[4520]: I0130 07:01:17.988239 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.057721 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549d55ddbc-cfmfx"] Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.058176 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" podUID="2336abfe-2191-4b5f-92bd-2077f6051a52" containerName="dnsmasq-dns" containerID="cri-o://e356745ca61bbc6db9c0e312560655ef1ecbaa123a0d96b37987d4a3c5aa44c3" gracePeriod=10 Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.076718 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-config-data" (OuterVolumeSpecName: "config-data") pod "4efe190c-047a-4463-9044-515816c2a7e1" (UID: "4efe190c-047a-4463-9044-515816c2a7e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.086277 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe190c-047a-4463-9044-515816c2a7e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.333406 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.349183 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.359549 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:18 crc kubenswrapper[4520]: E0130 07:01:18.360115 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="ceilometer-notification-agent" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.360135 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="ceilometer-notification-agent" Jan 30 07:01:18 crc kubenswrapper[4520]: E0130 07:01:18.360150 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="proxy-httpd" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.360156 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="proxy-httpd" Jan 30 07:01:18 crc kubenswrapper[4520]: E0130 07:01:18.360202 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="sg-core" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.360209 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="sg-core" Jan 30 07:01:18 crc kubenswrapper[4520]: E0130 07:01:18.360217 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efe190c-047a-4463-9044-515816c2a7e1" 
containerName="ceilometer-central-agent" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.360222 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="ceilometer-central-agent" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.360443 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="proxy-httpd" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.360457 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="ceilometer-central-agent" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.360469 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="sg-core" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.360477 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efe190c-047a-4463-9044-515816c2a7e1" containerName="ceilometer-notification-agent" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.367203 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.371849 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.384178 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.409427 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-config-data\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.409708 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-scripts\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.409830 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.409922 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.410015 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-log-httpd\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.410088 4520 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-run-httpd\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.410313 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhr26\" (UniqueName: \"kubernetes.io/projected/4b1a05da-505e-4ad3-8aba-596235eba06c-kube-api-access-bhr26\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.451593 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.511301 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-config-data\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.511338 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-scripts\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.511384 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.511415 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.511446 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-log-httpd\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.511471 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-run-httpd\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.511494 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhr26\" (UniqueName: \"kubernetes.io/projected/4b1a05da-505e-4ad3-8aba-596235eba06c-kube-api-access-bhr26\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.519964 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.520415 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-log-httpd\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.520674 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-run-httpd\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.524545 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-scripts\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.526269 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.527893 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-config-data\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.534049 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhr26\" (UniqueName: \"kubernetes.io/projected/4b1a05da-505e-4ad3-8aba-596235eba06c-kube-api-access-bhr26\") pod \"ceilometer-0\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.716155 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4efe190c-047a-4463-9044-515816c2a7e1" path="/var/lib/kubelet/pods/4efe190c-047a-4463-9044-515816c2a7e1/volumes" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.727657 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" event={"ID":"2336abfe-2191-4b5f-92bd-2077f6051a52","Type":"ContainerDied","Data":"e356745ca61bbc6db9c0e312560655ef1ecbaa123a0d96b37987d4a3c5aa44c3"} Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.724335 4520 generic.go:334] "Generic (PLEG): container finished" podID="2336abfe-2191-4b5f-92bd-2077f6051a52" containerID="e356745ca61bbc6db9c0e312560655ef1ecbaa123a0d96b37987d4a3c5aa44c3" exitCode=0 Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.724241 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.751105 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.751603 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d85f5b788-9fjcm" event={"ID":"e439f3dd-60bf-4740-b282-05179f982029","Type":"ContainerStarted","Data":"79d7f67a3c854eab70f4183816b9697e3539cba5999c3563ba28819298b41fef"} Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.752326 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.752359 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d85f5b788-9fjcm" Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.766693 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"34ccd84a-cc7c-4722-9873-dc7d2c816c0d","Type":"ContainerStarted","Data":"4234ee20f31d640d182e14440ed17a576cc671a1d4e9809ef6c0d22c06ee70e1"} Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.942441 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plvxr\" (UniqueName: \"kubernetes.io/projected/2336abfe-2191-4b5f-92bd-2077f6051a52-kube-api-access-plvxr\") pod \"2336abfe-2191-4b5f-92bd-2077f6051a52\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.942795 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-swift-storage-0\") pod \"2336abfe-2191-4b5f-92bd-2077f6051a52\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.942879 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-sb\") pod \"2336abfe-2191-4b5f-92bd-2077f6051a52\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.943062 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-config\") pod \"2336abfe-2191-4b5f-92bd-2077f6051a52\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.943080 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-svc\") pod \"2336abfe-2191-4b5f-92bd-2077f6051a52\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.943217 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-nb\") pod \"2336abfe-2191-4b5f-92bd-2077f6051a52\" (UID: \"2336abfe-2191-4b5f-92bd-2077f6051a52\") " Jan 30 07:01:18 crc kubenswrapper[4520]: I0130 07:01:18.958302 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2336abfe-2191-4b5f-92bd-2077f6051a52-kube-api-access-plvxr" (OuterVolumeSpecName: "kube-api-access-plvxr") pod "2336abfe-2191-4b5f-92bd-2077f6051a52" (UID: "2336abfe-2191-4b5f-92bd-2077f6051a52"). 
InnerVolumeSpecName "kube-api-access-plvxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.060086 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plvxr\" (UniqueName: \"kubernetes.io/projected/2336abfe-2191-4b5f-92bd-2077f6051a52-kube-api-access-plvxr\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.082249 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-config" (OuterVolumeSpecName: "config") pod "2336abfe-2191-4b5f-92bd-2077f6051a52" (UID: "2336abfe-2191-4b5f-92bd-2077f6051a52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.146099 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2336abfe-2191-4b5f-92bd-2077f6051a52" (UID: "2336abfe-2191-4b5f-92bd-2077f6051a52"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.162463 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.162494 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.163163 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2336abfe-2191-4b5f-92bd-2077f6051a52" (UID: "2336abfe-2191-4b5f-92bd-2077f6051a52"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.163586 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2336abfe-2191-4b5f-92bd-2077f6051a52" (UID: "2336abfe-2191-4b5f-92bd-2077f6051a52"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.164243 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2336abfe-2191-4b5f-92bd-2077f6051a52" (UID: "2336abfe-2191-4b5f-92bd-2077f6051a52"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.264072 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.264329 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.264342 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2336abfe-2191-4b5f-92bd-2077f6051a52-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.476464 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d85f5b788-9fjcm" podStartSLOduration=3.476434668 podStartE2EDuration="3.476434668s" podCreationTimestamp="2026-01-30 07:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:18.805821651 +0000 UTC m=+992.434173831" watchObservedRunningTime="2026-01-30 07:01:19.476434668 +0000 UTC m=+993.104786849" Jan 30 07:01:19 crc kubenswrapper[4520]: W0130 07:01:19.481755 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b1a05da_505e_4ad3_8aba_596235eba06c.slice/crio-bcbccd5f55858e991718eb20b367739f6b1dbf065e5dd2841123d25206bc437b WatchSource:0}: Error finding container bcbccd5f55858e991718eb20b367739f6b1dbf065e5dd2841123d25206bc437b: Status 404 returned error can't find the container with id bcbccd5f55858e991718eb20b367739f6b1dbf065e5dd2841123d25206bc437b Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.482063 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.777023 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"34ccd84a-cc7c-4722-9873-dc7d2c816c0d","Type":"ContainerStarted","Data":"65c74b831d5d97eec14be377db5ca8995d4703674416c74c8b472568f60b84f8"} Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.777152 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.778539 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerStarted","Data":"bcbccd5f55858e991718eb20b367739f6b1dbf065e5dd2841123d25206bc437b"} Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.780414 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" event={"ID":"2336abfe-2191-4b5f-92bd-2077f6051a52","Type":"ContainerDied","Data":"2b94d9e5799981260df75d82cc68717c8311598496fc3290ff16cf0fd3541852"} Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.780464 4520 scope.go:117] "RemoveContainer" containerID="e356745ca61bbc6db9c0e312560655ef1ecbaa123a0d96b37987d4a3c5aa44c3" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.780480 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549d55ddbc-cfmfx" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.807182 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.807166937 podStartE2EDuration="3.807166937s" podCreationTimestamp="2026-01-30 07:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:19.802929972 +0000 UTC m=+993.431282153" watchObservedRunningTime="2026-01-30 07:01:19.807166937 +0000 UTC m=+993.435519119" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.813829 4520 scope.go:117] "RemoveContainer" containerID="fae679a885187e9e7526d2a5cdddf61022fc4fb70619c2be743b77b3ebecdc17" Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.845271 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549d55ddbc-cfmfx"] Jan 30 07:01:19 crc kubenswrapper[4520]: I0130 07:01:19.849456 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549d55ddbc-cfmfx"] Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.007677 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.019109 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56c9b4b8d6-x299t" Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.151467 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-547b6f779b-dz8tp"] Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.151798 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-547b6f779b-dz8tp" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerName="barbican-api-log" containerID="cri-o://1281963ebd91f55ec78c917f7564f3630c56884448a25ec9f50e96dbd8a292c5" gracePeriod=30 Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.152365 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-547b6f779b-dz8tp" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerName="barbican-api" containerID="cri-o://9abf0917b7cbc5c42092d54a6b476db185df50cc70c046577c3acc101542d581" gracePeriod=30 Jan 30 07:01:20 crc kubenswrapper[4520]: E0130 07:01:20.397180 4520 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c478fd5_c0c9_4959_8b1f_69b89aa24932.slice/crio-1281963ebd91f55ec78c917f7564f3630c56884448a25ec9f50e96dbd8a292c5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c478fd5_c0c9_4959_8b1f_69b89aa24932.slice/crio-conmon-1281963ebd91f55ec78c917f7564f3630c56884448a25ec9f50e96dbd8a292c5.scope\": RecentStats: unable to find data in memory cache]" Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.697011 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2336abfe-2191-4b5f-92bd-2077f6051a52" path="/var/lib/kubelet/pods/2336abfe-2191-4b5f-92bd-2077f6051a52/volumes" Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.714219 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-d9dd85bbd-2g75n" Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.794660 4520 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c459697cb-g922m"] Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.794911 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon-log" containerID="cri-o://2b747fc744b96278e67ea47a8f4cfb4393466c3789a5b3eca465bed0bea2d640" gracePeriod=30 Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.794962 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" containerID="cri-o://d03bf2e75cec449c2d1120c53868d2b6ad99cf296b31eb75a042471f6bea2caa" gracePeriod=30 Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.807998 4520 generic.go:334] "Generic (PLEG): container finished" podID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerID="1281963ebd91f55ec78c917f7564f3630c56884448a25ec9f50e96dbd8a292c5" exitCode=143 Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.808162 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-547b6f779b-dz8tp" event={"ID":"8c478fd5-c0c9-4959-8b1f-69b89aa24932","Type":"ContainerDied","Data":"1281963ebd91f55ec78c917f7564f3630c56884448a25ec9f50e96dbd8a292c5"} Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.821018 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerStarted","Data":"22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a"} Jan 30 07:01:20 crc kubenswrapper[4520]: I0130 07:01:20.822245 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 30 07:01:21 crc kubenswrapper[4520]: I0130 07:01:21.844932 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerStarted","Data":"478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f"} Jan 30 07:01:22 crc kubenswrapper[4520]: I0130 07:01:22.854846 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerStarted","Data":"6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14"} Jan 30 07:01:23 crc kubenswrapper[4520]: I0130 07:01:23.005094 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-59d84c9dc8-9scqq" Jan 30 07:01:23 crc kubenswrapper[4520]: I0130 07:01:23.302005 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 07:01:23 crc kubenswrapper[4520]: I0130 07:01:23.382557 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:23 crc kubenswrapper[4520]: I0130 07:01:23.878333 4520 generic.go:334] "Generic (PLEG): container finished" podID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerID="9abf0917b7cbc5c42092d54a6b476db185df50cc70c046577c3acc101542d581" exitCode=0 Jan 30 07:01:23 crc kubenswrapper[4520]: I0130 07:01:23.878718 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="ede99291-73df-453d-80f2-3e4744245bb4" containerName="cinder-scheduler" containerID="cri-o://55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766" gracePeriod=30 Jan 30 07:01:23 crc kubenswrapper[4520]: I0130 07:01:23.879674 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-547b6f779b-dz8tp" event={"ID":"8c478fd5-c0c9-4959-8b1f-69b89aa24932","Type":"ContainerDied","Data":"9abf0917b7cbc5c42092d54a6b476db185df50cc70c046577c3acc101542d581"} Jan 30 07:01:23 crc kubenswrapper[4520]: I0130 07:01:23.879685 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ede99291-73df-453d-80f2-3e4744245bb4" containerName="probe" containerID="cri-o://7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442" gracePeriod=30 Jan 30 07:01:23 crc kubenswrapper[4520]: I0130 07:01:23.968377 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:60164->10.217.0.150:8443: read: connection reset by peer" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.032561 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 07:01:24 crc kubenswrapper[4520]: E0130 07:01:24.033202 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2336abfe-2191-4b5f-92bd-2077f6051a52" containerName="init" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.033215 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2336abfe-2191-4b5f-92bd-2077f6051a52" containerName="init" Jan 30 07:01:24 crc kubenswrapper[4520]: E0130 07:01:24.033245 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2336abfe-2191-4b5f-92bd-2077f6051a52" containerName="dnsmasq-dns" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.033251 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2336abfe-2191-4b5f-92bd-2077f6051a52" containerName="dnsmasq-dns" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.033421 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="2336abfe-2191-4b5f-92bd-2077f6051a52" containerName="dnsmasq-dns" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.034110 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.041703 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.042390 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.042534 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xrz2s" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.062314 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.069133 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.169651 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.226953 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-combined-ca-bundle\") pod \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.227278 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c478fd5-c0c9-4959-8b1f-69b89aa24932-logs\") pod \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.227739 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvskx\" (UniqueName: \"kubernetes.io/projected/8c478fd5-c0c9-4959-8b1f-69b89aa24932-kube-api-access-hvskx\") pod \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.227948 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c478fd5-c0c9-4959-8b1f-69b89aa24932-logs" (OuterVolumeSpecName: "logs") pod "8c478fd5-c0c9-4959-8b1f-69b89aa24932" (UID: "8c478fd5-c0c9-4959-8b1f-69b89aa24932"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.228777 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data-custom\") pod \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.228869 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data\") pod \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\" (UID: \"8c478fd5-c0c9-4959-8b1f-69b89aa24932\") " Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.229807 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/adfdf2da-e6a3-4282-accf-c847645aa0fc-openstack-config-secret\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.230021 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbnm\" (UniqueName: \"kubernetes.io/projected/adfdf2da-e6a3-4282-accf-c847645aa0fc-kube-api-access-tvbnm\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.230145 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/adfdf2da-e6a3-4282-accf-c847645aa0fc-openstack-config\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.230342 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfdf2da-e6a3-4282-accf-c847645aa0fc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.230493 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c478fd5-c0c9-4959-8b1f-69b89aa24932-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.235424 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8c478fd5-c0c9-4959-8b1f-69b89aa24932" (UID: "8c478fd5-c0c9-4959-8b1f-69b89aa24932"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.243470 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c478fd5-c0c9-4959-8b1f-69b89aa24932-kube-api-access-hvskx" (OuterVolumeSpecName: "kube-api-access-hvskx") pod "8c478fd5-c0c9-4959-8b1f-69b89aa24932" (UID: "8c478fd5-c0c9-4959-8b1f-69b89aa24932"). InnerVolumeSpecName "kube-api-access-hvskx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.266180 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c478fd5-c0c9-4959-8b1f-69b89aa24932" (UID: "8c478fd5-c0c9-4959-8b1f-69b89aa24932"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.292566 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data" (OuterVolumeSpecName: "config-data") pod "8c478fd5-c0c9-4959-8b1f-69b89aa24932" (UID: "8c478fd5-c0c9-4959-8b1f-69b89aa24932"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.333443 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/adfdf2da-e6a3-4282-accf-c847645aa0fc-openstack-config-secret\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.334147 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbnm\" (UniqueName: \"kubernetes.io/projected/adfdf2da-e6a3-4282-accf-c847645aa0fc-kube-api-access-tvbnm\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.334260 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/adfdf2da-e6a3-4282-accf-c847645aa0fc-openstack-config\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.334425 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfdf2da-e6a3-4282-accf-c847645aa0fc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.334593 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.335256 4520 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.335327 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c478fd5-c0c9-4959-8b1f-69b89aa24932-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.335381 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvskx\" (UniqueName: \"kubernetes.io/projected/8c478fd5-c0c9-4959-8b1f-69b89aa24932-kube-api-access-hvskx\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 
07:01:24.335681 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/adfdf2da-e6a3-4282-accf-c847645aa0fc-openstack-config\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.342175 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfdf2da-e6a3-4282-accf-c847645aa0fc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.343368 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/adfdf2da-e6a3-4282-accf-c847645aa0fc-openstack-config-secret\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.350480 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbnm\" (UniqueName: \"kubernetes.io/projected/adfdf2da-e6a3-4282-accf-c847645aa0fc-kube-api-access-tvbnm\") pod \"openstackclient\" (UID: \"adfdf2da-e6a3-4282-accf-c847645aa0fc\") " pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.377398 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.861928 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 07:01:24 crc kubenswrapper[4520]: W0130 07:01:24.883615 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadfdf2da_e6a3_4282_accf_c847645aa0fc.slice/crio-f80a9be73baaf13e87e863f5b9a28f4dbfc6ccb48f7915b1faa9d3af2bb67c7c WatchSource:0}: Error finding container f80a9be73baaf13e87e863f5b9a28f4dbfc6ccb48f7915b1faa9d3af2bb67c7c: Status 404 returned error can't find the container with id f80a9be73baaf13e87e863f5b9a28f4dbfc6ccb48f7915b1faa9d3af2bb67c7c Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.891908 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-547b6f779b-dz8tp" event={"ID":"8c478fd5-c0c9-4959-8b1f-69b89aa24932","Type":"ContainerDied","Data":"e326497d93780513dd2d1f5150ade3b1429c7489ef31b50c0a4003263b113f0f"} Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.891986 4520 scope.go:117] "RemoveContainer" containerID="9abf0917b7cbc5c42092d54a6b476db185df50cc70c046577c3acc101542d581" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.892134 4520 util.go:48] "No ready sandbox for pod can be found. 
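
The W-level "Failed to process watch event ... Status 404" for the new openstackclient sandbox just above (and earlier for ceilometer-0) looks like a race between the cgroup watch and container registration: the cgroup directory appears before the stats collector can resolve the container, or the container is gone again by the time it tries. A purely illustrative sketch of why such a lookup miss is skippable rather than fatal (no cadvisor APIs are used here, and the cgroup path is abbreviated):

```go
package main

import (
	"errors"
	"fmt"
)

var errContainerNotFound = errors.New("can't find the container with id")

type watchEvent struct{ cgroupPath string }

// handleEvent tolerates lookups that miss: a container whose cgroup just
// appeared may not be registered with the runtime yet, and one whose
// cgroup just vanished needs no stats at all.
func handleEvent(ev watchEvent, inspect func(string) error) {
	err := inspect(ev.cgroupPath)
	switch {
	case err == nil:
		fmt.Printf("watching %s\n", ev.cgroupPath)
	case errors.Is(err, errContainerNotFound):
		fmt.Printf("skipping %s: %v (benign race)\n", ev.cgroupPath, err)
	default:
		fmt.Printf("failed to process watch event for %s: %v\n", ev.cgroupPath, err)
	}
}

func main() {
	ev := watchEvent{cgroupPath: "/kubepods.slice/kubepods-besteffort.slice/crio-f80a9be7.scope"}
	handleEvent(ev, func(string) error { return errContainerNotFound })
}
```
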
Need to start a new one" pod="openstack/barbican-api-547b6f779b-dz8tp" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.901153 4520 generic.go:334] "Generic (PLEG): container finished" podID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerID="d03bf2e75cec449c2d1120c53868d2b6ad99cf296b31eb75a042471f6bea2caa" exitCode=0 Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.901221 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c459697cb-g922m" event={"ID":"3380703e-5659-4040-8b43-e3ada0eaa6b6","Type":"ContainerDied","Data":"d03bf2e75cec449c2d1120c53868d2b6ad99cf296b31eb75a042471f6bea2caa"} Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.913074 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerStarted","Data":"0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8"} Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.914534 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.936190 4520 scope.go:117] "RemoveContainer" containerID="1281963ebd91f55ec78c917f7564f3630c56884448a25ec9f50e96dbd8a292c5" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.937898 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.1262972749999998 podStartE2EDuration="6.937858615s" podCreationTimestamp="2026-01-30 07:01:18 +0000 UTC" firstStartedPulling="2026-01-30 07:01:19.484379244 +0000 UTC m=+993.112731426" lastFinishedPulling="2026-01-30 07:01:24.295940585 +0000 UTC m=+997.924292766" observedRunningTime="2026-01-30 07:01:24.935064822 +0000 UTC m=+998.563417003" watchObservedRunningTime="2026-01-30 07:01:24.937858615 +0000 UTC m=+998.566210796" Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.973767 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-547b6f779b-dz8tp"] Jan 30 07:01:24 crc kubenswrapper[4520]: I0130 07:01:24.979311 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-547b6f779b-dz8tp"] Jan 30 07:01:25 crc kubenswrapper[4520]: I0130 07:01:25.927243 4520 generic.go:334] "Generic (PLEG): container finished" podID="ede99291-73df-453d-80f2-3e4744245bb4" containerID="7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442" exitCode=0 Jan 30 07:01:25 crc kubenswrapper[4520]: I0130 07:01:25.927324 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ede99291-73df-453d-80f2-3e4744245bb4","Type":"ContainerDied","Data":"7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442"} Jan 30 07:01:25 crc kubenswrapper[4520]: I0130 07:01:25.928565 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"adfdf2da-e6a3-4282-accf-c847645aa0fc","Type":"ContainerStarted","Data":"f80a9be73baaf13e87e863f5b9a28f4dbfc6ccb48f7915b1faa9d3af2bb67c7c"} Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.737751 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.758505 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" path="/var/lib/kubelet/pods/8c478fd5-c0c9-4959-8b1f-69b89aa24932/volumes" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.825862 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtrft\" (UniqueName: \"kubernetes.io/projected/ede99291-73df-453d-80f2-3e4744245bb4-kube-api-access-jtrft\") pod \"ede99291-73df-453d-80f2-3e4744245bb4\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.825994 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-combined-ca-bundle\") pod \"ede99291-73df-453d-80f2-3e4744245bb4\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.826185 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ede99291-73df-453d-80f2-3e4744245bb4-etc-machine-id\") pod \"ede99291-73df-453d-80f2-3e4744245bb4\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.826341 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data\") pod \"ede99291-73df-453d-80f2-3e4744245bb4\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.826412 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data-custom\") pod \"ede99291-73df-453d-80f2-3e4744245bb4\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.826735 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-scripts\") pod \"ede99291-73df-453d-80f2-3e4744245bb4\" (UID: \"ede99291-73df-453d-80f2-3e4744245bb4\") " Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.826869 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ede99291-73df-453d-80f2-3e4744245bb4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ede99291-73df-453d-80f2-3e4744245bb4" (UID: "ede99291-73df-453d-80f2-3e4744245bb4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.831729 4520 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ede99291-73df-453d-80f2-3e4744245bb4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.844751 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede99291-73df-453d-80f2-3e4744245bb4-kube-api-access-jtrft" (OuterVolumeSpecName: "kube-api-access-jtrft") pod "ede99291-73df-453d-80f2-3e4744245bb4" (UID: "ede99291-73df-453d-80f2-3e4744245bb4"). InnerVolumeSpecName "kube-api-access-jtrft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.849793 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-scripts" (OuterVolumeSpecName: "scripts") pod "ede99291-73df-453d-80f2-3e4744245bb4" (UID: "ede99291-73df-453d-80f2-3e4744245bb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.873679 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ede99291-73df-453d-80f2-3e4744245bb4" (UID: "ede99291-73df-453d-80f2-3e4744245bb4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.937407 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.937443 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtrft\" (UniqueName: \"kubernetes.io/projected/ede99291-73df-453d-80f2-3e4744245bb4-kube-api-access-jtrft\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.937459 4520 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.971744 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ede99291-73df-453d-80f2-3e4744245bb4" (UID: "ede99291-73df-453d-80f2-3e4744245bb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.974577 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data" (OuterVolumeSpecName: "config-data") pod "ede99291-73df-453d-80f2-3e4744245bb4" (UID: "ede99291-73df-453d-80f2-3e4744245bb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.974966 4520 generic.go:334] "Generic (PLEG): container finished" podID="ede99291-73df-453d-80f2-3e4744245bb4" containerID="55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766" exitCode=0 Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.975754 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.976321 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ede99291-73df-453d-80f2-3e4744245bb4","Type":"ContainerDied","Data":"55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766"} Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.976367 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ede99291-73df-453d-80f2-3e4744245bb4","Type":"ContainerDied","Data":"4457fe1b790d671fa38cbbc60033f451df981f81220cb9508553e9764c082fa1"} Jan 30 07:01:26 crc kubenswrapper[4520]: I0130 07:01:26.976391 4520 scope.go:117] "RemoveContainer" containerID="7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.038872 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.038896 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede99291-73df-453d-80f2-3e4744245bb4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.057004 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.085804 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.086028 4520 scope.go:117] "RemoveContainer" containerID="55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.095615 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:27 crc kubenswrapper[4520]: E0130 07:01:27.095980 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede99291-73df-453d-80f2-3e4744245bb4" containerName="cinder-scheduler" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.095999 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede99291-73df-453d-80f2-3e4744245bb4" containerName="cinder-scheduler" Jan 30 07:01:27 crc kubenswrapper[4520]: E0130 07:01:27.096011 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerName="barbican-api-log" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.096018 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerName="barbican-api-log" Jan 30 07:01:27 crc kubenswrapper[4520]: E0130 07:01:27.096033 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede99291-73df-453d-80f2-3e4744245bb4" containerName="probe" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.096039 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede99291-73df-453d-80f2-3e4744245bb4" containerName="probe" Jan 30 07:01:27 crc kubenswrapper[4520]: E0130 07:01:27.096069 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerName="barbican-api" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.096074 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" 
containerName="barbican-api" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.096215 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede99291-73df-453d-80f2-3e4744245bb4" containerName="probe" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.096228 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede99291-73df-453d-80f2-3e4744245bb4" containerName="cinder-scheduler" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.096247 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerName="barbican-api" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.096255 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c478fd5-c0c9-4959-8b1f-69b89aa24932" containerName="barbican-api-log" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.097187 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.101640 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.113502 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.135342 4520 scope.go:117] "RemoveContainer" containerID="7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442" Jan 30 07:01:27 crc kubenswrapper[4520]: E0130 07:01:27.136094 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442\": container with ID starting with 7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442 not found: ID does not exist" containerID="7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.136128 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442"} err="failed to get container status \"7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442\": rpc error: code = NotFound desc = could not find container \"7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442\": container with ID starting with 7a4a8a1160e596dac5a9d742c3a8bebd6a3092b3edfeb4f5733142143c35f442 not found: ID does not exist" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.136159 4520 scope.go:117] "RemoveContainer" containerID="55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766" Jan 30 07:01:27 crc kubenswrapper[4520]: E0130 07:01:27.136411 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766\": container with ID starting with 55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766 not found: ID does not exist" containerID="55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.136432 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766"} err="failed to get container status \"55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766\": rpc 
error: code = NotFound desc = could not find container \"55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766\": container with ID starting with 55bd13feafcb79075c8b0e65a1bc19bb0d437eb1d5651bf56a25fe7841bf6766 not found: ID does not exist" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.255077 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.255161 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.255227 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.255314 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.255333 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.255353 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79p2d\" (UniqueName: \"kubernetes.io/projected/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-kube-api-access-79p2d\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.356964 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.357046 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.357113 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0"
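The "RemoveContainer" / "ContainerStatus from runtime service failed" exchange above is a benign race rather than a real failure: the kubelet re-issues RemoveContainer for IDs still in its bookkeeping, and CRI-O answers NotFound because the containers (7a4a8a11..., 55bd13fe...) were already gone. The sketch below is a toy stand-in, not kubelet's actual code; it only illustrates why treating NotFound as "already deleted" makes the cleanup idempotent.
```python
class NotFoundError(Exception):
    """Stands in for the gRPC 'code = NotFound' error seen in the log."""

class FakeRuntime:
    """Hypothetical in-memory runtime used purely for illustration."""
    def __init__(self, containers):
        self._containers = set(containers)
    def remove_container(self, cid):
        if cid not in self._containers:
            raise NotFoundError(f"could not find container {cid!r}")
        self._containers.remove(cid)

def remove_if_present(runtime, cid):
    # Idempotent cleanup: NotFound means an earlier pass (or the runtime
    # itself) already deleted the container, so it is logged and ignored.
    try:
        runtime.remove_container(cid)
        print(f"removed {cid[:12]}")
    except NotFoundError as err:
        print(f"already gone, ignoring: {err}")

rt = FakeRuntime({"55bd13feafcb"})
remove_if_present(rt, "55bd13feafcb")  # first call removes it
remove_if_present(rt, "55bd13feafcb")  # second call is a no-op, as in the log
```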
\"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.357133 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.357156 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79p2d\" (UniqueName: \"kubernetes.io/projected/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-kube-api-access-79p2d\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.357217 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.361454 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.361725 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.362217 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.369355 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.383503 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.385020 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79p2d\" (UniqueName: \"kubernetes.io/projected/0ab985e5-7e52-4438-9d9b-fd6f2e4f4175-kube-api-access-79p2d\") pod \"cinder-scheduler-0\" (UID: \"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175\") " pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.411137 4520 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.762621 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.793367 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:01:27 crc kubenswrapper[4520]: I0130 07:01:27.793470 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:01:28 crc kubenswrapper[4520]: I0130 07:01:28.012629 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175","Type":"ContainerStarted","Data":"8131285fc48c28f329f9bf09dfa126a2498db6fe35f7732ff010b892a3890c7e"} Jan 30 07:01:28 crc kubenswrapper[4520]: I0130 07:01:28.392870 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:01:28 crc kubenswrapper[4520]: I0130 07:01:28.464947 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bb59b888-snb5k"] Jan 30 07:01:28 crc kubenswrapper[4520]: I0130 07:01:28.465182 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bb59b888-snb5k" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerName="neutron-api" containerID="cri-o://1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318" gracePeriod=30 Jan 30 07:01:28 crc kubenswrapper[4520]: I0130 07:01:28.465615 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bb59b888-snb5k" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerName="neutron-httpd" containerID="cri-o://7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8" gracePeriod=30 Jan 30 07:01:28 crc kubenswrapper[4520]: I0130 07:01:28.732124 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede99291-73df-453d-80f2-3e4744245bb4" path="/var/lib/kubelet/pods/ede99291-73df-453d-80f2-3e4744245bb4/volumes" Jan 30 07:01:29 crc kubenswrapper[4520]: I0130 07:01:29.035024 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175","Type":"ContainerStarted","Data":"c4663a9d667d51593ed37c045a3bcba0b11fb3e8d15f7d9d413e2be4a5d9f1e2"} Jan 30 07:01:29 crc kubenswrapper[4520]: I0130 07:01:29.631030 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.046099 4520 generic.go:334] "Generic (PLEG): container finished" podID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerID="7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8" exitCode=0 Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.046183 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb59b888-snb5k" event={"ID":"2cd8643f-309c-46b4-bc83-4a8548e98403","Type":"ContainerDied","Data":"7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8"}
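The two "Killing container with a grace period ... gracePeriod=30" entries above, followed about 1.5 s later by "container finished ... exitCode=0", show the normal termination path: the runtime delivers SIGTERM, the process exits on its own well inside the 30-second window, and SIGKILL is never needed. A minimal sketch of that escalation pattern follows; it illustrates the semantics only and is not kubelet's implementation.
```python
import subprocess, sys

def stop_with_grace(proc: subprocess.Popen, grace_seconds: float) -> int:
    """SIGTERM first; SIGKILL only if the grace period runs out."""
    proc.terminate()  # polite stop, analogous to a StopContainer call with a grace period
    try:
        return proc.wait(timeout=grace_seconds)
    except subprocess.TimeoutExpired:
        proc.kill()   # hard stop once the grace period is spent
        return proc.wait()

# A stand-in workload. It has no SIGTERM handler, so it dies right away
# with status -15; neutron-httpd above handled SIGTERM and exited 0 instead.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(300)"])
print("exit status:", stop_with_grace(p, grace_seconds=30.0))
```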
event={"ID":"2cd8643f-309c-46b4-bc83-4a8548e98403","Type":"ContainerDied","Data":"7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8"} Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.050578 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175","Type":"ContainerStarted","Data":"f13e88cdc6fd4e16b709da326a130755cf2db6dd6d91f1319e773fd9b20f737f"} Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.079831 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.079804551 podStartE2EDuration="3.079804551s" podCreationTimestamp="2026-01-30 07:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:30.07814956 +0000 UTC m=+1003.706501741" watchObservedRunningTime="2026-01-30 07:01:30.079804551 +0000 UTC m=+1003.708156732" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.827178 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.848207 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-config\") pod \"2cd8643f-309c-46b4-bc83-4a8548e98403\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.848304 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-ovndb-tls-certs\") pod \"2cd8643f-309c-46b4-bc83-4a8548e98403\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.848366 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5csg7\" (UniqueName: \"kubernetes.io/projected/2cd8643f-309c-46b4-bc83-4a8548e98403-kube-api-access-5csg7\") pod \"2cd8643f-309c-46b4-bc83-4a8548e98403\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.848404 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-combined-ca-bundle\") pod \"2cd8643f-309c-46b4-bc83-4a8548e98403\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.848437 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-httpd-config\") pod \"2cd8643f-309c-46b4-bc83-4a8548e98403\" (UID: \"2cd8643f-309c-46b4-bc83-4a8548e98403\") " Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.873691 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2cd8643f-309c-46b4-bc83-4a8548e98403" (UID: "2cd8643f-309c-46b4-bc83-4a8548e98403"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.877007 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd8643f-309c-46b4-bc83-4a8548e98403-kube-api-access-5csg7" (OuterVolumeSpecName: "kube-api-access-5csg7") pod "2cd8643f-309c-46b4-bc83-4a8548e98403" (UID: "2cd8643f-309c-46b4-bc83-4a8548e98403"). InnerVolumeSpecName "kube-api-access-5csg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.924029 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-config" (OuterVolumeSpecName: "config") pod "2cd8643f-309c-46b4-bc83-4a8548e98403" (UID: "2cd8643f-309c-46b4-bc83-4a8548e98403"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.927256 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cd8643f-309c-46b4-bc83-4a8548e98403" (UID: "2cd8643f-309c-46b4-bc83-4a8548e98403"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.952821 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.952848 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5csg7\" (UniqueName: \"kubernetes.io/projected/2cd8643f-309c-46b4-bc83-4a8548e98403-kube-api-access-5csg7\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.952861 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.952870 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:30 crc kubenswrapper[4520]: I0130 07:01:30.982391 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2cd8643f-309c-46b4-bc83-4a8548e98403" (UID: "2cd8643f-309c-46b4-bc83-4a8548e98403"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.053974 4520 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cd8643f-309c-46b4-bc83-4a8548e98403-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.065978 4520 generic.go:334] "Generic (PLEG): container finished" podID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerID="1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318" exitCode=0 Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.066051 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bb59b888-snb5k" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.066134 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb59b888-snb5k" event={"ID":"2cd8643f-309c-46b4-bc83-4a8548e98403","Type":"ContainerDied","Data":"1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318"} Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.066171 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bb59b888-snb5k" event={"ID":"2cd8643f-309c-46b4-bc83-4a8548e98403","Type":"ContainerDied","Data":"ef0aebc2a60a38de685158277404e0ffbbc0857912a670dbce26f262284150b1"} Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.066194 4520 scope.go:117] "RemoveContainer" containerID="7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.099841 4520 scope.go:117] "RemoveContainer" containerID="1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.110658 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bb59b888-snb5k"] Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.118503 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7bb59b888-snb5k"] Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.151800 4520 scope.go:117] "RemoveContainer" containerID="7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8" Jan 30 07:01:31 crc kubenswrapper[4520]: E0130 07:01:31.152797 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8\": container with ID starting with 7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8 not found: ID does not exist" containerID="7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.152843 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8"} err="failed to get container status \"7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8\": rpc error: code = NotFound desc = could not find container \"7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8\": container with ID starting with 7ecdddfc4824136d2fcf6c8ee6cc8e4129a06e861c8d6afd9ea9ffb1f08a36f8 not found: ID does not exist" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.152872 4520 scope.go:117] "RemoveContainer" containerID="1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318" Jan 30 07:01:31 crc kubenswrapper[4520]: E0130 07:01:31.165939 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318\": container with ID starting with 1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318 not found: ID does not exist" containerID="1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.165968 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318"} err="failed to get container status 
\"1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318\": rpc error: code = NotFound desc = could not find container \"1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318\": container with ID starting with 1777b6cea22bdbdae82198be0676944390b1d9269a0d4ab14913b1a10002f318 not found: ID does not exist" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.171565 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7fc4c495dc-4wmrl"] Jan 30 07:01:31 crc kubenswrapper[4520]: E0130 07:01:31.171965 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerName="neutron-httpd" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.171985 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerName="neutron-httpd" Jan 30 07:01:31 crc kubenswrapper[4520]: E0130 07:01:31.172009 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerName="neutron-api" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.172015 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerName="neutron-api" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.172196 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerName="neutron-httpd" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.172217 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" containerName="neutron-api" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.173128 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.184079 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.184331 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.184438 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.236157 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7fc4c495dc-4wmrl"] Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.257771 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-internal-tls-certs\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.257910 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-public-tls-certs\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.258008 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vbql\" (UniqueName: \"kubernetes.io/projected/17fafdee-9ab2-479b-85e0-96e3ef98daa8-kube-api-access-5vbql\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.258114 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-combined-ca-bundle\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.258231 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17fafdee-9ab2-479b-85e0-96e3ef98daa8-etc-swift\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.258310 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-config-data\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.258425 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17fafdee-9ab2-479b-85e0-96e3ef98daa8-run-httpd\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " 
pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.258510 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17fafdee-9ab2-479b-85e0-96e3ef98daa8-log-httpd\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.368454 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17fafdee-9ab2-479b-85e0-96e3ef98daa8-etc-swift\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.368558 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-config-data\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.368654 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17fafdee-9ab2-479b-85e0-96e3ef98daa8-run-httpd\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.368712 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17fafdee-9ab2-479b-85e0-96e3ef98daa8-log-httpd\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.368773 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-internal-tls-certs\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.368821 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-public-tls-certs\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.368888 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vbql\" (UniqueName: \"kubernetes.io/projected/17fafdee-9ab2-479b-85e0-96e3ef98daa8-kube-api-access-5vbql\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.368949 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-combined-ca-bundle\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 
07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.369352 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17fafdee-9ab2-479b-85e0-96e3ef98daa8-run-httpd\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.373009 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-combined-ca-bundle\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.373259 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17fafdee-9ab2-479b-85e0-96e3ef98daa8-log-httpd\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.375564 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17fafdee-9ab2-479b-85e0-96e3ef98daa8-etc-swift\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.376047 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-public-tls-certs\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.381543 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-internal-tls-certs\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.382855 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fafdee-9ab2-479b-85e0-96e3ef98daa8-config-data\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.400854 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vbql\" (UniqueName: \"kubernetes.io/projected/17fafdee-9ab2-479b-85e0-96e3ef98daa8-kube-api-access-5vbql\") pod \"swift-proxy-7fc4c495dc-4wmrl\" (UID: \"17fafdee-9ab2-479b-85e0-96e3ef98daa8\") " pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:31 crc kubenswrapper[4520]: I0130 07:01:31.506153 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.217094 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-d2pnr"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.221963 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d2pnr"
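The "SyncLoop ADD/UPDATE/DELETE" lines threaded through this stretch are the kubelet's view of the same pod changes any API watcher sees. A client-side equivalent, sketched with the kubernetes Python client (it assumes a reachable cluster and a local kubeconfig, and is illustrative rather than part of this log): a watch on pods in the openstack namespace yields ADDED/MODIFIED/DELETED events that line up with these entries.
```python
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()
w = watch.Watch()
# Each ADDED/MODIFIED/DELETED event here corresponds to a
# "SyncLoop ADD/UPDATE/DELETE" record in the kubelet log.
for event in w.stream(v1.list_namespaced_pod, namespace="openstack", timeout_seconds=30):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.phase)
```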
Need to start a new one" pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.231161 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7fc4c495dc-4wmrl"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.264614 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d2pnr"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.287028 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-operator-scripts\") pod \"nova-api-db-create-d2pnr\" (UID: \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\") " pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.287068 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h7vf\" (UniqueName: \"kubernetes.io/projected/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-kube-api-access-9h7vf\") pod \"nova-api-db-create-d2pnr\" (UID: \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\") " pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.322468 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8dnb5"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.323903 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.345675 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-328b-account-create-update-gh8sv"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.347074 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.356981 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.362727 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8dnb5"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.382970 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-328b-account-create-update-gh8sv"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.389990 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd7bl\" (UniqueName: \"kubernetes.io/projected/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-kube-api-access-gd7bl\") pod \"nova-api-328b-account-create-update-gh8sv\" (UID: \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\") " pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.390065 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-operator-scripts\") pod \"nova-api-db-create-d2pnr\" (UID: \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\") " pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.390091 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h7vf\" (UniqueName: \"kubernetes.io/projected/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-kube-api-access-9h7vf\") pod \"nova-api-db-create-d2pnr\" (UID: \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\") " pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.390159 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495b2abf-5190-409c-85b0-e6dcbef9ceaf-operator-scripts\") pod \"nova-cell0-db-create-8dnb5\" (UID: \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\") " pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.390206 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrv2l\" (UniqueName: \"kubernetes.io/projected/495b2abf-5190-409c-85b0-e6dcbef9ceaf-kube-api-access-hrv2l\") pod \"nova-cell0-db-create-8dnb5\" (UID: \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\") " pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.390244 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-operator-scripts\") pod \"nova-api-328b-account-create-update-gh8sv\" (UID: \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\") " pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.390931 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-operator-scripts\") pod \"nova-api-db-create-d2pnr\" (UID: \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\") " pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.413936 4520 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.425007 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h7vf\" (UniqueName: \"kubernetes.io/projected/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-kube-api-access-9h7vf\") pod \"nova-api-db-create-d2pnr\" (UID: \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\") " pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.456340 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.456740 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="ceilometer-central-agent" containerID="cri-o://22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a" gracePeriod=30 Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.457140 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="proxy-httpd" containerID="cri-o://0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8" gracePeriod=30 Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.457555 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="ceilometer-notification-agent" containerID="cri-o://478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f" gracePeriod=30 Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.457601 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="sg-core" containerID="cri-o://6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14" gracePeriod=30 Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.494118 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495b2abf-5190-409c-85b0-e6dcbef9ceaf-operator-scripts\") pod \"nova-cell0-db-create-8dnb5\" (UID: \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\") " pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.494216 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrv2l\" (UniqueName: \"kubernetes.io/projected/495b2abf-5190-409c-85b0-e6dcbef9ceaf-kube-api-access-hrv2l\") pod \"nova-cell0-db-create-8dnb5\" (UID: \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\") " pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.494283 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-operator-scripts\") pod \"nova-api-328b-account-create-update-gh8sv\" (UID: \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\") " pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.494401 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd7bl\" (UniqueName: \"kubernetes.io/projected/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-kube-api-access-gd7bl\") pod \"nova-api-328b-account-create-update-gh8sv\" (UID: 
\"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\") " pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.495763 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-operator-scripts\") pod \"nova-api-328b-account-create-update-gh8sv\" (UID: \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\") " pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.496322 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495b2abf-5190-409c-85b0-e6dcbef9ceaf-operator-scripts\") pod \"nova-cell0-db-create-8dnb5\" (UID: \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\") " pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.517855 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kbm45"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.519807 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.520995 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrv2l\" (UniqueName: \"kubernetes.io/projected/495b2abf-5190-409c-85b0-e6dcbef9ceaf-kube-api-access-hrv2l\") pod \"nova-cell0-db-create-8dnb5\" (UID: \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\") " pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.545798 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.545807 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd7bl\" (UniqueName: \"kubernetes.io/projected/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-kube-api-access-gd7bl\") pod \"nova-api-328b-account-create-update-gh8sv\" (UID: \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\") " pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.569013 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kbm45"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.591792 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-006f-account-create-update-xbt7p"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.594767 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.597719 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac5b11b-fec5-471b-8b93-5755e9adbf6a-operator-scripts\") pod \"nova-cell1-db-create-kbm45\" (UID: \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\") " pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.597922 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqnc\" (UniqueName: \"kubernetes.io/projected/dac5b11b-fec5-471b-8b93-5755e9adbf6a-kube-api-access-9tqnc\") pod \"nova-cell1-db-create-kbm45\" (UID: \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\") " pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.601110 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.629437 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-006f-account-create-update-xbt7p"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.701895 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac5b11b-fec5-471b-8b93-5755e9adbf6a-operator-scripts\") pod \"nova-cell1-db-create-kbm45\" (UID: \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\") " pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.702038 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6r8\" (UniqueName: \"kubernetes.io/projected/b22ec224-5158-47c9-bb3e-70a93147f671-kube-api-access-bz6r8\") pod \"nova-cell0-006f-account-create-update-xbt7p\" (UID: \"b22ec224-5158-47c9-bb3e-70a93147f671\") " pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.702099 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22ec224-5158-47c9-bb3e-70a93147f671-operator-scripts\") pod \"nova-cell0-006f-account-create-update-xbt7p\" (UID: \"b22ec224-5158-47c9-bb3e-70a93147f671\") " pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.702147 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tqnc\" (UniqueName: \"kubernetes.io/projected/dac5b11b-fec5-471b-8b93-5755e9adbf6a-kube-api-access-9tqnc\") pod \"nova-cell1-db-create-kbm45\" (UID: \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\") " pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.703573 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac5b11b-fec5-471b-8b93-5755e9adbf6a-operator-scripts\") pod \"nova-cell1-db-create-kbm45\" (UID: \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\") " pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.715601 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.728476 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.730147 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tqnc\" (UniqueName: \"kubernetes.io/projected/dac5b11b-fec5-471b-8b93-5755e9adbf6a-kube-api-access-9tqnc\") pod \"nova-cell1-db-create-kbm45\" (UID: \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\") " pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.758354 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd8643f-309c-46b4-bc83-4a8548e98403" path="/var/lib/kubelet/pods/2cd8643f-309c-46b4-bc83-4a8548e98403/volumes" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.761144 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f0fc-account-create-update-r729j"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.764496 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.773361 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.805950 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xnsz\" (UniqueName: \"kubernetes.io/projected/4fa64730-e3ba-46af-8849-f4f38170b71a-kube-api-access-6xnsz\") pod \"nova-cell1-f0fc-account-create-update-r729j\" (UID: \"4fa64730-e3ba-46af-8849-f4f38170b71a\") " pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.806292 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa64730-e3ba-46af-8849-f4f38170b71a-operator-scripts\") pod \"nova-cell1-f0fc-account-create-update-r729j\" (UID: \"4fa64730-e3ba-46af-8849-f4f38170b71a\") " pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.806379 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6r8\" (UniqueName: \"kubernetes.io/projected/b22ec224-5158-47c9-bb3e-70a93147f671-kube-api-access-bz6r8\") pod \"nova-cell0-006f-account-create-update-xbt7p\" (UID: \"b22ec224-5158-47c9-bb3e-70a93147f671\") " pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.806422 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22ec224-5158-47c9-bb3e-70a93147f671-operator-scripts\") pod \"nova-cell0-006f-account-create-update-xbt7p\" (UID: \"b22ec224-5158-47c9-bb3e-70a93147f671\") " pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.822022 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22ec224-5158-47c9-bb3e-70a93147f671-operator-scripts\") pod \"nova-cell0-006f-account-create-update-xbt7p\" (UID: \"b22ec224-5158-47c9-bb3e-70a93147f671\") " 
pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.832094 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f0fc-account-create-update-r729j"] Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.833614 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz6r8\" (UniqueName: \"kubernetes.io/projected/b22ec224-5158-47c9-bb3e-70a93147f671-kube-api-access-bz6r8\") pod \"nova-cell0-006f-account-create-update-xbt7p\" (UID: \"b22ec224-5158-47c9-bb3e-70a93147f671\") " pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.866395 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.922631 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa64730-e3ba-46af-8849-f4f38170b71a-operator-scripts\") pod \"nova-cell1-f0fc-account-create-update-r729j\" (UID: \"4fa64730-e3ba-46af-8849-f4f38170b71a\") " pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.924907 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xnsz\" (UniqueName: \"kubernetes.io/projected/4fa64730-e3ba-46af-8849-f4f38170b71a-kube-api-access-6xnsz\") pod \"nova-cell1-f0fc-account-create-update-r729j\" (UID: \"4fa64730-e3ba-46af-8849-f4f38170b71a\") " pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.926284 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa64730-e3ba-46af-8849-f4f38170b71a-operator-scripts\") pod \"nova-cell1-f0fc-account-create-update-r729j\" (UID: \"4fa64730-e3ba-46af-8849-f4f38170b71a\") " pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.955625 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:32 crc kubenswrapper[4520]: I0130 07:01:32.997730 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xnsz\" (UniqueName: \"kubernetes.io/projected/4fa64730-e3ba-46af-8849-f4f38170b71a-kube-api-access-6xnsz\") pod \"nova-cell1-f0fc-account-create-update-r729j\" (UID: \"4fa64730-e3ba-46af-8849-f4f38170b71a\") " pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.101560 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" event={"ID":"17fafdee-9ab2-479b-85e0-96e3ef98daa8","Type":"ContainerStarted","Data":"d4c9bc6a20fa6846d56aab9dee46388d99489cad5ac7a22ae020a789d399f607"} Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.101636 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" event={"ID":"17fafdee-9ab2-479b-85e0-96e3ef98daa8","Type":"ContainerStarted","Data":"629547be79a279af4d17d72192e270a21cc114bb0e5e5cf759bce27f39526e11"} Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.102241 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.109387 4520 generic.go:334] "Generic (PLEG): container finished" podID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerID="0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8" exitCode=0 Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.109410 4520 generic.go:334] "Generic (PLEG): container finished" podID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerID="6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14" exitCode=2 Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.109420 4520 generic.go:334] "Generic (PLEG): container finished" podID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerID="22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a" exitCode=0 Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.109436 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerDied","Data":"0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8"} Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.109467 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerDied","Data":"6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14"} Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.109481 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerDied","Data":"22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a"} Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.218689 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d2pnr"] Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.519622 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-328b-account-create-update-gh8sv"] Jan 30 07:01:33 crc kubenswrapper[4520]: W0130 07:01:33.555357 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161e12d5_a5ef_44fd_ae2c_7b76c39202eb.slice/crio-9d2935c261e903c53ad7d1931acf79a765cdb53909ac1aa50d5b671f1c244625 WatchSource:0}: Error finding container 9d2935c261e903c53ad7d1931acf79a765cdb53909ac1aa50d5b671f1c244625: Status 404 returned error can't find the container with id 9d2935c261e903c53ad7d1931acf79a765cdb53909ac1aa50d5b671f1c244625 Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.658355 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kbm45"] Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.817647 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8dnb5"] Jan 30 07:01:33 crc kubenswrapper[4520]: W0130 07:01:33.820473 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod495b2abf_5190_409c_85b0_e6dcbef9ceaf.slice/crio-bd4fbed02727367f64e00acbf302a2f16ca73433bec3963861eb3280ffdbbaaf WatchSource:0}: Error finding container bd4fbed02727367f64e00acbf302a2f16ca73433bec3963861eb3280ffdbbaaf: Status 404 returned error can't find the container with id bd4fbed02727367f64e00acbf302a2f16ca73433bec3963861eb3280ffdbbaaf Jan 30 07:01:33 crc kubenswrapper[4520]: I0130 07:01:33.830967 4520 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-006f-account-create-update-xbt7p"] Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.002562 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f0fc-account-create-update-r729j"] Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.150899 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-328b-account-create-update-gh8sv" event={"ID":"161e12d5-a5ef-44fd-ae2c-7b76c39202eb","Type":"ContainerStarted","Data":"5c17e12724ac3abe596329d4b8fe0e2cf1d706e4940192903c766a0b189667ac"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.150952 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-328b-account-create-update-gh8sv" event={"ID":"161e12d5-a5ef-44fd-ae2c-7b76c39202eb","Type":"ContainerStarted","Data":"9d2935c261e903c53ad7d1931acf79a765cdb53909ac1aa50d5b671f1c244625"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.159148 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-006f-account-create-update-xbt7p" event={"ID":"b22ec224-5158-47c9-bb3e-70a93147f671","Type":"ContainerStarted","Data":"138cf1678322a3c17b1b1c29b8b0a94da063183db84d7fee8659cac9c428b8c4"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.163624 4520 generic.go:334] "Generic (PLEG): container finished" podID="efc58f00-bc50-41a3-ae74-2ed020c0ac1a" containerID="648d00aa9bf7d684cae40e3f61a3761b2af70ff04edff4dd1f6278577713ff57" exitCode=0 Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.163803 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d2pnr" event={"ID":"efc58f00-bc50-41a3-ae74-2ed020c0ac1a","Type":"ContainerDied","Data":"648d00aa9bf7d684cae40e3f61a3761b2af70ff04edff4dd1f6278577713ff57"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.163832 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d2pnr" event={"ID":"efc58f00-bc50-41a3-ae74-2ed020c0ac1a","Type":"ContainerStarted","Data":"4d1903c446fe875ca9465b1d520253c12a09ea41f7fc25e181e35871bd831601"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.170672 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-328b-account-create-update-gh8sv" podStartSLOduration=2.170652443 podStartE2EDuration="2.170652443s" podCreationTimestamp="2026-01-30 07:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:34.167907281 +0000 UTC m=+1007.796259463" watchObservedRunningTime="2026-01-30 07:01:34.170652443 +0000 UTC m=+1007.799004624" Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.171145 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.174589 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8dnb5" event={"ID":"495b2abf-5190-409c-85b0-e6dcbef9ceaf","Type":"ContainerStarted","Data":"bd4fbed02727367f64e00acbf302a2f16ca73433bec3963861eb3280ffdbbaaf"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.191029 4520 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" event={"ID":"17fafdee-9ab2-479b-85e0-96e3ef98daa8","Type":"ContainerStarted","Data":"53b988d806973256f2d7e9dba8f6333199f65a5b39dce68541f16bfa59dfec09"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.191134 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.191407 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.194832 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f0fc-account-create-update-r729j" event={"ID":"4fa64730-e3ba-46af-8849-f4f38170b71a","Type":"ContainerStarted","Data":"2967f78daa5e50a1f8bd22da58e4ca03e644a0587de24a1393a3eec8ed39a876"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.206865 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kbm45" event={"ID":"dac5b11b-fec5-471b-8b93-5755e9adbf6a","Type":"ContainerStarted","Data":"cd562a670606b26c7b8f8e5f6f745115aa2c7e3fe743a0ebd86f1ec034037538"} Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.212931 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" podStartSLOduration=3.212919879 podStartE2EDuration="3.212919879s" podCreationTimestamp="2026-01-30 07:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:34.206946157 +0000 UTC m=+1007.835298338" watchObservedRunningTime="2026-01-30 07:01:34.212919879 +0000 UTC m=+1007.841272059" Jan 30 07:01:34 crc kubenswrapper[4520]: I0130 07:01:34.912946 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.047418 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhr26\" (UniqueName: \"kubernetes.io/projected/4b1a05da-505e-4ad3-8aba-596235eba06c-kube-api-access-bhr26\") pod \"4b1a05da-505e-4ad3-8aba-596235eba06c\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.047498 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-combined-ca-bundle\") pod \"4b1a05da-505e-4ad3-8aba-596235eba06c\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.047570 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-log-httpd\") pod \"4b1a05da-505e-4ad3-8aba-596235eba06c\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.047646 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-scripts\") pod \"4b1a05da-505e-4ad3-8aba-596235eba06c\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.047822 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-run-httpd\") pod \"4b1a05da-505e-4ad3-8aba-596235eba06c\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.047910 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-config-data\") pod \"4b1a05da-505e-4ad3-8aba-596235eba06c\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.048052 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-sg-core-conf-yaml\") pod \"4b1a05da-505e-4ad3-8aba-596235eba06c\" (UID: \"4b1a05da-505e-4ad3-8aba-596235eba06c\") " Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.050116 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b1a05da-505e-4ad3-8aba-596235eba06c" (UID: "4b1a05da-505e-4ad3-8aba-596235eba06c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.050418 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b1a05da-505e-4ad3-8aba-596235eba06c" (UID: "4b1a05da-505e-4ad3-8aba-596235eba06c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.058656 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-scripts" (OuterVolumeSpecName: "scripts") pod "4b1a05da-505e-4ad3-8aba-596235eba06c" (UID: "4b1a05da-505e-4ad3-8aba-596235eba06c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.058725 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b1a05da-505e-4ad3-8aba-596235eba06c-kube-api-access-bhr26" (OuterVolumeSpecName: "kube-api-access-bhr26") pod "4b1a05da-505e-4ad3-8aba-596235eba06c" (UID: "4b1a05da-505e-4ad3-8aba-596235eba06c"). InnerVolumeSpecName "kube-api-access-bhr26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.101613 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b1a05da-505e-4ad3-8aba-596235eba06c" (UID: "4b1a05da-505e-4ad3-8aba-596235eba06c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.111666 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b1a05da-505e-4ad3-8aba-596235eba06c" (UID: "4b1a05da-505e-4ad3-8aba-596235eba06c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.151881 4520 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.151920 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhr26\" (UniqueName: \"kubernetes.io/projected/4b1a05da-505e-4ad3-8aba-596235eba06c-kube-api-access-bhr26\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.151935 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.151947 4520 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.151958 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.151968 4520 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b1a05da-505e-4ad3-8aba-596235eba06c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.152548 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-config-data" (OuterVolumeSpecName: "config-data") pod "4b1a05da-505e-4ad3-8aba-596235eba06c" (UID: "4b1a05da-505e-4ad3-8aba-596235eba06c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.193026 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7hthh"] Jan 30 07:01:35 crc kubenswrapper[4520]: E0130 07:01:35.193438 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="ceilometer-central-agent" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.193452 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="ceilometer-central-agent" Jan 30 07:01:35 crc kubenswrapper[4520]: E0130 07:01:35.193474 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="proxy-httpd" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.193479 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="proxy-httpd" Jan 30 07:01:35 crc kubenswrapper[4520]: E0130 07:01:35.193492 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="sg-core" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.193498 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="sg-core" Jan 30 07:01:35 crc kubenswrapper[4520]: E0130 07:01:35.193510 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="ceilometer-notification-agent" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.196977 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="ceilometer-notification-agent" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.197261 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="ceilometer-central-agent" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.197279 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="sg-core" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.197307 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="ceilometer-notification-agent" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.197318 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerName="proxy-httpd" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.198868 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.222430 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hthh"] Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.243898 4520 generic.go:334] "Generic (PLEG): container finished" podID="495b2abf-5190-409c-85b0-e6dcbef9ceaf" containerID="638836148d5e03844d89be689272833795cba79bfcf01cd5b104693429f721c6" exitCode=0 Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.244106 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8dnb5" event={"ID":"495b2abf-5190-409c-85b0-e6dcbef9ceaf","Type":"ContainerDied","Data":"638836148d5e03844d89be689272833795cba79bfcf01cd5b104693429f721c6"} Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.255613 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a05da-505e-4ad3-8aba-596235eba06c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.256133 4520 generic.go:334] "Generic (PLEG): container finished" podID="4fa64730-e3ba-46af-8849-f4f38170b71a" containerID="31e05604b9996f0edc15611d6cbc37f4a9c70393866a096aefd9b99e7057c7e9" exitCode=0 Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.256186 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f0fc-account-create-update-r729j" event={"ID":"4fa64730-e3ba-46af-8849-f4f38170b71a","Type":"ContainerDied","Data":"31e05604b9996f0edc15611d6cbc37f4a9c70393866a096aefd9b99e7057c7e9"} Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.270047 4520 generic.go:334] "Generic (PLEG): container finished" podID="4b1a05da-505e-4ad3-8aba-596235eba06c" containerID="478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f" exitCode=0 Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.270100 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerDied","Data":"478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f"} Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.270118 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b1a05da-505e-4ad3-8aba-596235eba06c","Type":"ContainerDied","Data":"bcbccd5f55858e991718eb20b367739f6b1dbf065e5dd2841123d25206bc437b"} Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.270140 4520 scope.go:117] "RemoveContainer" containerID="0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.270296 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.281735 4520 generic.go:334] "Generic (PLEG): container finished" podID="dac5b11b-fec5-471b-8b93-5755e9adbf6a" containerID="2badfe22aa20192f0acd3b1e47d272ec04921189361d6fb7c7e4b9b7da91ed9f" exitCode=0 Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.281843 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kbm45" event={"ID":"dac5b11b-fec5-471b-8b93-5755e9adbf6a","Type":"ContainerDied","Data":"2badfe22aa20192f0acd3b1e47d272ec04921189361d6fb7c7e4b9b7da91ed9f"} Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.293879 4520 generic.go:334] "Generic (PLEG): container finished" podID="161e12d5-a5ef-44fd-ae2c-7b76c39202eb" containerID="5c17e12724ac3abe596329d4b8fe0e2cf1d706e4940192903c766a0b189667ac" exitCode=0 Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.293957 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-328b-account-create-update-gh8sv" event={"ID":"161e12d5-a5ef-44fd-ae2c-7b76c39202eb","Type":"ContainerDied","Data":"5c17e12724ac3abe596329d4b8fe0e2cf1d706e4940192903c766a0b189667ac"} Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.296638 4520 generic.go:334] "Generic (PLEG): container finished" podID="b22ec224-5158-47c9-bb3e-70a93147f671" containerID="5e08fba8ff43657b3eec2fabb594b131c029cb860ee9f0d6053de5ae0b8b3bc3" exitCode=0 Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.297101 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-006f-account-create-update-xbt7p" event={"ID":"b22ec224-5158-47c9-bb3e-70a93147f671","Type":"ContainerDied","Data":"5e08fba8ff43657b3eec2fabb594b131c029cb860ee9f0d6053de5ae0b8b3bc3"} Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.356487 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hn48\" (UniqueName: \"kubernetes.io/projected/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-kube-api-access-2hn48\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.356563 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-utilities\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.356613 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-catalog-content\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.358270 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.378563 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.389293 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.392436 4520 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.399589 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.403389 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.403566 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459463 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-run-httpd\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459592 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-scripts\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459618 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-log-httpd\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459649 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459676 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlpz\" (UniqueName: \"kubernetes.io/projected/9424be29-ccf4-449c-ad6a-dae1997dd5ab-kube-api-access-wtlpz\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459702 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hn48\" (UniqueName: \"kubernetes.io/projected/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-kube-api-access-2hn48\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459770 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-config-data\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459794 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-utilities\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " 
pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459866 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-catalog-content\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.459949 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.461764 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-catalog-content\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.461826 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-utilities\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.480288 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hn48\" (UniqueName: \"kubernetes.io/projected/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-kube-api-access-2hn48\") pod \"certified-operators-7hthh\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.513746 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.563902 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.565018 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-run-httpd\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.565181 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-scripts\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.565234 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-log-httpd\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.565302 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.565359 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlpz\" (UniqueName: \"kubernetes.io/projected/9424be29-ccf4-449c-ad6a-dae1997dd5ab-kube-api-access-wtlpz\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.565772 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-log-httpd\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.565941 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-run-httpd\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.566493 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-config-data\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.577000 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-scripts\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " 
pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.581356 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.584621 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-config-data\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.586088 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.588238 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlpz\" (UniqueName: \"kubernetes.io/projected/9424be29-ccf4-449c-ad6a-dae1997dd5ab-kube-api-access-wtlpz\") pod \"ceilometer-0\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " pod="openstack/ceilometer-0" Jan 30 07:01:35 crc kubenswrapper[4520]: I0130 07:01:35.707485 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:36 crc kubenswrapper[4520]: I0130 07:01:36.708972 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b1a05da-505e-4ad3-8aba-596235eba06c" path="/var/lib/kubelet/pods/4b1a05da-505e-4ad3-8aba-596235eba06c/volumes" Jan 30 07:01:37 crc kubenswrapper[4520]: I0130 07:01:37.677951 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 07:01:41 crc kubenswrapper[4520]: I0130 07:01:41.515801 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:41 crc kubenswrapper[4520]: I0130 07:01:41.519213 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.226904 4520 scope.go:117] "RemoveContainer" containerID="6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.379226 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.379663 4520 scope.go:117] "RemoveContainer" containerID="478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.387366 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.458600 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.458825 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.458893 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.460953 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-328b-account-create-update-gh8sv" event={"ID":"161e12d5-a5ef-44fd-ae2c-7b76c39202eb","Type":"ContainerDied","Data":"9d2935c261e903c53ad7d1931acf79a765cdb53909ac1aa50d5b671f1c244625"} Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.460989 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d2935c261e903c53ad7d1931acf79a765cdb53909ac1aa50d5b671f1c244625" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.463395 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-006f-account-create-update-xbt7p" event={"ID":"b22ec224-5158-47c9-bb3e-70a93147f671","Type":"ContainerDied","Data":"138cf1678322a3c17b1b1c29b8b0a94da063183db84d7fee8659cac9c428b8c4"} Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.463428 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138cf1678322a3c17b1b1c29b8b0a94da063183db84d7fee8659cac9c428b8c4" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.468405 4520 scope.go:117] "RemoveContainer" containerID="22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.478712 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d2pnr" event={"ID":"efc58f00-bc50-41a3-ae74-2ed020c0ac1a","Type":"ContainerDied","Data":"4d1903c446fe875ca9465b1d520253c12a09ea41f7fc25e181e35871bd831601"} Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.478762 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1903c446fe875ca9465b1d520253c12a09ea41f7fc25e181e35871bd831601" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.478839 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d2pnr" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.485860 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8dnb5" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.486492 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8dnb5" event={"ID":"495b2abf-5190-409c-85b0-e6dcbef9ceaf","Type":"ContainerDied","Data":"bd4fbed02727367f64e00acbf302a2f16ca73433bec3963861eb3280ffdbbaaf"} Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.486554 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4fbed02727367f64e00acbf302a2f16ca73433bec3963861eb3280ffdbbaaf" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.494729 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f0fc-account-create-update-r729j" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.495317 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f0fc-account-create-update-r729j" event={"ID":"4fa64730-e3ba-46af-8849-f4f38170b71a","Type":"ContainerDied","Data":"2967f78daa5e50a1f8bd22da58e4ca03e644a0587de24a1393a3eec8ed39a876"} Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.495354 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2967f78daa5e50a1f8bd22da58e4ca03e644a0587de24a1393a3eec8ed39a876" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.513424 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kbm45" event={"ID":"dac5b11b-fec5-471b-8b93-5755e9adbf6a","Type":"ContainerDied","Data":"cd562a670606b26c7b8f8e5f6f745115aa2c7e3fe743a0ebd86f1ec034037538"} Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.513599 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd562a670606b26c7b8f8e5f6f745115aa2c7e3fe743a0ebd86f1ec034037538" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.513798 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kbm45" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.540041 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.570199 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tqnc\" (UniqueName: \"kubernetes.io/projected/dac5b11b-fec5-471b-8b93-5755e9adbf6a-kube-api-access-9tqnc\") pod \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\" (UID: \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.570800 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-operator-scripts\") pod \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\" (UID: \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.570831 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac5b11b-fec5-471b-8b93-5755e9adbf6a-operator-scripts\") pod \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\" (UID: \"dac5b11b-fec5-471b-8b93-5755e9adbf6a\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.570870 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xnsz\" (UniqueName: \"kubernetes.io/projected/4fa64730-e3ba-46af-8849-f4f38170b71a-kube-api-access-6xnsz\") pod \"4fa64730-e3ba-46af-8849-f4f38170b71a\" (UID: \"4fa64730-e3ba-46af-8849-f4f38170b71a\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.571015 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrv2l\" (UniqueName: \"kubernetes.io/projected/495b2abf-5190-409c-85b0-e6dcbef9ceaf-kube-api-access-hrv2l\") pod \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\" (UID: \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.571047 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/495b2abf-5190-409c-85b0-e6dcbef9ceaf-operator-scripts\") pod \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\" (UID: \"495b2abf-5190-409c-85b0-e6dcbef9ceaf\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.571064 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa64730-e3ba-46af-8849-f4f38170b71a-operator-scripts\") pod \"4fa64730-e3ba-46af-8849-f4f38170b71a\" (UID: \"4fa64730-e3ba-46af-8849-f4f38170b71a\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.571107 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-operator-scripts\") pod \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\" (UID: \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.571133 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd7bl\" (UniqueName: \"kubernetes.io/projected/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-kube-api-access-gd7bl\") pod \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\" (UID: \"161e12d5-a5ef-44fd-ae2c-7b76c39202eb\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.571221 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h7vf\" (UniqueName: \"kubernetes.io/projected/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-kube-api-access-9h7vf\") pod \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\" (UID: \"efc58f00-bc50-41a3-ae74-2ed020c0ac1a\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.571297 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz6r8\" (UniqueName: \"kubernetes.io/projected/b22ec224-5158-47c9-bb3e-70a93147f671-kube-api-access-bz6r8\") pod \"b22ec224-5158-47c9-bb3e-70a93147f671\" (UID: \"b22ec224-5158-47c9-bb3e-70a93147f671\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.572605 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac5b11b-fec5-471b-8b93-5755e9adbf6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dac5b11b-fec5-471b-8b93-5755e9adbf6a" (UID: "dac5b11b-fec5-471b-8b93-5755e9adbf6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.573089 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "161e12d5-a5ef-44fd-ae2c-7b76c39202eb" (UID: "161e12d5-a5ef-44fd-ae2c-7b76c39202eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.573171 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efc58f00-bc50-41a3-ae74-2ed020c0ac1a" (UID: "efc58f00-bc50-41a3-ae74-2ed020c0ac1a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.573390 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495b2abf-5190-409c-85b0-e6dcbef9ceaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "495b2abf-5190-409c-85b0-e6dcbef9ceaf" (UID: "495b2abf-5190-409c-85b0-e6dcbef9ceaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.576564 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-kube-api-access-gd7bl" (OuterVolumeSpecName: "kube-api-access-gd7bl") pod "161e12d5-a5ef-44fd-ae2c-7b76c39202eb" (UID: "161e12d5-a5ef-44fd-ae2c-7b76c39202eb"). InnerVolumeSpecName "kube-api-access-gd7bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.577589 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa64730-e3ba-46af-8849-f4f38170b71a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fa64730-e3ba-46af-8849-f4f38170b71a" (UID: "4fa64730-e3ba-46af-8849-f4f38170b71a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.578255 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa64730-e3ba-46af-8849-f4f38170b71a-kube-api-access-6xnsz" (OuterVolumeSpecName: "kube-api-access-6xnsz") pod "4fa64730-e3ba-46af-8849-f4f38170b71a" (UID: "4fa64730-e3ba-46af-8849-f4f38170b71a"). InnerVolumeSpecName "kube-api-access-6xnsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.579873 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac5b11b-fec5-471b-8b93-5755e9adbf6a-kube-api-access-9tqnc" (OuterVolumeSpecName: "kube-api-access-9tqnc") pod "dac5b11b-fec5-471b-8b93-5755e9adbf6a" (UID: "dac5b11b-fec5-471b-8b93-5755e9adbf6a"). InnerVolumeSpecName "kube-api-access-9tqnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.580461 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495b2abf-5190-409c-85b0-e6dcbef9ceaf-kube-api-access-hrv2l" (OuterVolumeSpecName: "kube-api-access-hrv2l") pod "495b2abf-5190-409c-85b0-e6dcbef9ceaf" (UID: "495b2abf-5190-409c-85b0-e6dcbef9ceaf"). InnerVolumeSpecName "kube-api-access-hrv2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.585620 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22ec224-5158-47c9-bb3e-70a93147f671-kube-api-access-bz6r8" (OuterVolumeSpecName: "kube-api-access-bz6r8") pod "b22ec224-5158-47c9-bb3e-70a93147f671" (UID: "b22ec224-5158-47c9-bb3e-70a93147f671"). InnerVolumeSpecName "kube-api-access-bz6r8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.585670 4520 scope.go:117] "RemoveContainer" containerID="0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.585918 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-kube-api-access-9h7vf" (OuterVolumeSpecName: "kube-api-access-9h7vf") pod "efc58f00-bc50-41a3-ae74-2ed020c0ac1a" (UID: "efc58f00-bc50-41a3-ae74-2ed020c0ac1a"). InnerVolumeSpecName "kube-api-access-9h7vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: E0130 07:01:42.586101 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8\": container with ID starting with 0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8 not found: ID does not exist" containerID="0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.586139 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8"} err="failed to get container status \"0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8\": rpc error: code = NotFound desc = could not find container \"0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8\": container with ID starting with 0fe1e97f4ca7fc31f5df6ca0c088afa106853f66614658702622686d0ec052a8 not found: ID does not exist" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.586162 4520 scope.go:117] "RemoveContainer" containerID="6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14" Jan 30 07:01:42 crc kubenswrapper[4520]: E0130 07:01:42.586355 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14\": container with ID starting with 6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14 not found: ID does not exist" containerID="6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.586373 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14"} err="failed to get container status \"6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14\": rpc error: code = NotFound desc = could not find container \"6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14\": container with ID starting with 6c7a741445971bf34cb83b6e6992059398c897d50c82940a95b1604b01e63d14 not found: ID does not exist" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.586386 4520 scope.go:117] "RemoveContainer" containerID="478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f" Jan 30 07:01:42 crc kubenswrapper[4520]: E0130 07:01:42.586889 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f\": container with ID starting with 478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f not found: ID does not 
exist" containerID="478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.586911 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f"} err="failed to get container status \"478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f\": rpc error: code = NotFound desc = could not find container \"478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f\": container with ID starting with 478c1e8e5bfe174ffbe6a8456374f8d97d5a9f6ab2106e33fd9d45af1c2c134f not found: ID does not exist" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.586928 4520 scope.go:117] "RemoveContainer" containerID="22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a" Jan 30 07:01:42 crc kubenswrapper[4520]: E0130 07:01:42.587110 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a\": container with ID starting with 22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a not found: ID does not exist" containerID="22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.587129 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a"} err="failed to get container status \"22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a\": rpc error: code = NotFound desc = could not find container \"22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a\": container with ID starting with 22745bd991aa16dc658bda284f170f8341b43f623a562ff5c3a49e31c372ad4a not found: ID does not exist" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.674350 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22ec224-5158-47c9-bb3e-70a93147f671-operator-scripts\") pod \"b22ec224-5158-47c9-bb3e-70a93147f671\" (UID: \"b22ec224-5158-47c9-bb3e-70a93147f671\") " Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.677690 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b22ec224-5158-47c9-bb3e-70a93147f671-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b22ec224-5158-47c9-bb3e-70a93147f671" (UID: "b22ec224-5158-47c9-bb3e-70a93147f671"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679554 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tqnc\" (UniqueName: \"kubernetes.io/projected/dac5b11b-fec5-471b-8b93-5755e9adbf6a-kube-api-access-9tqnc\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679588 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679598 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac5b11b-fec5-471b-8b93-5755e9adbf6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679609 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xnsz\" (UniqueName: \"kubernetes.io/projected/4fa64730-e3ba-46af-8849-f4f38170b71a-kube-api-access-6xnsz\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679622 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrv2l\" (UniqueName: \"kubernetes.io/projected/495b2abf-5190-409c-85b0-e6dcbef9ceaf-kube-api-access-hrv2l\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679631 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/495b2abf-5190-409c-85b0-e6dcbef9ceaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679649 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa64730-e3ba-46af-8849-f4f38170b71a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679658 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679667 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd7bl\" (UniqueName: \"kubernetes.io/projected/161e12d5-a5ef-44fd-ae2c-7b76c39202eb-kube-api-access-gd7bl\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679676 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h7vf\" (UniqueName: \"kubernetes.io/projected/efc58f00-bc50-41a3-ae74-2ed020c0ac1a-kube-api-access-9h7vf\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.679686 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz6r8\" (UniqueName: \"kubernetes.io/projected/b22ec224-5158-47c9-bb3e-70a93147f671-kube-api-access-bz6r8\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.780420 4520 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22ec224-5158-47c9-bb3e-70a93147f671-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.848848 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:42 crc 
kubenswrapper[4520]: W0130 07:01:42.849631 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9424be29_ccf4_449c_ad6a_dae1997dd5ab.slice/crio-b0159c197f61740cf184c06bdfb93a99bd05d2be5f65f9ef8ead7f6ad0961484 WatchSource:0}: Error finding container b0159c197f61740cf184c06bdfb93a99bd05d2be5f65f9ef8ead7f6ad0961484: Status 404 returned error can't find the container with id b0159c197f61740cf184c06bdfb93a99bd05d2be5f65f9ef8ead7f6ad0961484 Jan 30 07:01:42 crc kubenswrapper[4520]: I0130 07:01:42.957280 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hthh"] Jan 30 07:01:42 crc kubenswrapper[4520]: W0130 07:01:42.957550 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod843b1d9d_26f2_42d5_b8ff_331b66efd5f8.slice/crio-25ded74a5d949d5cfe5b4f60b555c7d97289d4d9d42de847af93046f31593e74 WatchSource:0}: Error finding container 25ded74a5d949d5cfe5b4f60b555c7d97289d4d9d42de847af93046f31593e74: Status 404 returned error can't find the container with id 25ded74a5d949d5cfe5b4f60b555c7d97289d4d9d42de847af93046f31593e74 Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.545036 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"adfdf2da-e6a3-4282-accf-c847645aa0fc","Type":"ContainerStarted","Data":"0a7fb682d3d01ca88d4553a6c55445bef889ca058ff8bbf4d5edf70d9ee9bf63"} Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.548153 4520 generic.go:334] "Generic (PLEG): container finished" podID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerID="021bdb919594cc9f63bef45c8d76edf9fda2c438fec7d13a8e7a279ba692f73d" exitCode=0 Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.549996 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hthh" event={"ID":"843b1d9d-26f2-42d5-b8ff-331b66efd5f8","Type":"ContainerDied","Data":"021bdb919594cc9f63bef45c8d76edf9fda2c438fec7d13a8e7a279ba692f73d"} Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.562100 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hthh" event={"ID":"843b1d9d-26f2-42d5-b8ff-331b66efd5f8","Type":"ContainerStarted","Data":"25ded74a5d949d5cfe5b4f60b555c7d97289d4d9d42de847af93046f31593e74"} Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.569884 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-006f-account-create-update-xbt7p" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.570985 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerStarted","Data":"b0159c197f61740cf184c06bdfb93a99bd05d2be5f65f9ef8ead7f6ad0961484"} Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.571052 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-328b-account-create-update-gh8sv" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.610766 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.215306043 podStartE2EDuration="20.610740227s" podCreationTimestamp="2026-01-30 07:01:23 +0000 UTC" firstStartedPulling="2026-01-30 07:01:24.893954492 +0000 UTC m=+998.522306674" lastFinishedPulling="2026-01-30 07:01:42.289388677 +0000 UTC m=+1015.917740858" observedRunningTime="2026-01-30 07:01:43.57375915 +0000 UTC m=+1017.202111321" watchObservedRunningTime="2026-01-30 07:01:43.610740227 +0000 UTC m=+1017.239092408" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.793415 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6b7486bc6d-lhplk"] Jan 30 07:01:43 crc kubenswrapper[4520]: E0130 07:01:43.794436 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc58f00-bc50-41a3-ae74-2ed020c0ac1a" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794461 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc58f00-bc50-41a3-ae74-2ed020c0ac1a" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: E0130 07:01:43.794483 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac5b11b-fec5-471b-8b93-5755e9adbf6a" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794492 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac5b11b-fec5-471b-8b93-5755e9adbf6a" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: E0130 07:01:43.794532 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495b2abf-5190-409c-85b0-e6dcbef9ceaf" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794540 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="495b2abf-5190-409c-85b0-e6dcbef9ceaf" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: E0130 07:01:43.794563 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22ec224-5158-47c9-bb3e-70a93147f671" containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794569 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22ec224-5158-47c9-bb3e-70a93147f671" containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: E0130 07:01:43.794583 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161e12d5-a5ef-44fd-ae2c-7b76c39202eb" containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794592 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="161e12d5-a5ef-44fd-ae2c-7b76c39202eb" containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: E0130 07:01:43.794607 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa64730-e3ba-46af-8849-f4f38170b71a" containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794615 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa64730-e3ba-46af-8849-f4f38170b71a" containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794856 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa64730-e3ba-46af-8849-f4f38170b71a" 
containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794878 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="495b2abf-5190-409c-85b0-e6dcbef9ceaf" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794886 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="161e12d5-a5ef-44fd-ae2c-7b76c39202eb" containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794895 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc58f00-bc50-41a3-ae74-2ed020c0ac1a" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794910 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="b22ec224-5158-47c9-bb3e-70a93147f671" containerName="mariadb-account-create-update" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.794925 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac5b11b-fec5-471b-8b93-5755e9adbf6a" containerName="mariadb-database-create" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.798895 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.800934 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.801397 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-spr62" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.801680 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.806265 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9nfl\" (UniqueName: \"kubernetes.io/projected/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-kube-api-access-n9nfl\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.806356 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data-custom\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.806407 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-combined-ca-bundle\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.806469 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.815632 4520 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-engine-6b7486bc6d-lhplk"] Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.908743 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9nfl\" (UniqueName: \"kubernetes.io/projected/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-kube-api-access-n9nfl\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.908843 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data-custom\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.908893 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-combined-ca-bundle\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.908938 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.918273 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.930681 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-combined-ca-bundle\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.934818 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data-custom\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.940117 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9nfl\" (UniqueName: \"kubernetes.io/projected/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-kube-api-access-n9nfl\") pod \"heat-engine-6b7486bc6d-lhplk\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:43 crc kubenswrapper[4520]: I0130 07:01:43.998318 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65948bc6c-vwm6m"] Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.000587 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.044916 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7c4c8c7bb-pfwmd"] Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.046146 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.049686 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.066559 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65948bc6c-vwm6m"] Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.083555 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c4c8c7bb-pfwmd"] Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.116290 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.118076 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-66664cb669-j765l"] Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.118607 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-combined-ca-bundle\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.118759 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data-custom\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.118882 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-nb\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.118962 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv4g7\" (UniqueName: \"kubernetes.io/projected/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-kube-api-access-zv4g7\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.119029 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-svc\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.119113 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data\") 
pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.119202 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-swift-storage-0\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.119297 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-sb\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.119371 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.119431 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng5s\" (UniqueName: \"kubernetes.io/projected/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-kube-api-access-tng5s\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.119503 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-config\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.135818 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.169249 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c459697cb-g922m" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.195190 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66664cb669-j765l"] Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232182 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tng5s\" (UniqueName: \"kubernetes.io/projected/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-kube-api-access-tng5s\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232253 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-config\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232279 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-combined-ca-bundle\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232328 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data-custom\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232358 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data-custom\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232415 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-combined-ca-bundle\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232452 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-nb\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232477 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv4g7\" (UniqueName: \"kubernetes.io/projected/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-kube-api-access-zv4g7\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232497 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-svc\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232527 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p67lf\" (UniqueName: \"kubernetes.io/projected/2ddf81a9-672a-457d-a233-087d73b4890a-kube-api-access-p67lf\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232567 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232614 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-swift-storage-0\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232653 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.232694 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-sb\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.233471 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-sb\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.233822 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-nb\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.237336 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-svc\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.242279 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-swift-storage-0\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.242566 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-config\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.245080 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data-custom\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.254476 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-combined-ca-bundle\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.259949 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.269021 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tng5s\" (UniqueName: \"kubernetes.io/projected/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-kube-api-access-tng5s\") pod \"heat-cfnapi-7c4c8c7bb-pfwmd\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.274158 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv4g7\" (UniqueName: \"kubernetes.io/projected/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-kube-api-access-zv4g7\") pod \"dnsmasq-dns-65948bc6c-vwm6m\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") " pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.335925 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data-custom\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.335999 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-combined-ca-bundle\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.336035 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p67lf\" (UniqueName: \"kubernetes.io/projected/2ddf81a9-672a-457d-a233-087d73b4890a-kube-api-access-p67lf\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.336093 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.341597 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.344624 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-combined-ca-bundle\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.348237 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data-custom\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.364160 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.367176 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p67lf\" (UniqueName: \"kubernetes.io/projected/2ddf81a9-672a-457d-a233-087d73b4890a-kube-api-access-p67lf\") pod \"heat-api-66664cb669-j765l\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.380770 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.471086 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.624135 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerStarted","Data":"a4ba56155730aa47005206621dbdbf22dc6eff2744b886b1ddf481ef64c91099"} Jan 30 07:01:44 crc kubenswrapper[4520]: I0130 07:01:44.765285 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6b7486bc6d-lhplk"] Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.011157 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c4c8c7bb-pfwmd"] Jan 30 07:01:45 crc kubenswrapper[4520]: W0130 07:01:45.011730 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c99ef8b_2ef2_4e57_996c_d74afbaa161e.slice/crio-c9fb7ee3fa91ea11b57d39df2f3d728a96994401b7f2537408860e9070bb7589 WatchSource:0}: Error finding container c9fb7ee3fa91ea11b57d39df2f3d728a96994401b7f2537408860e9070bb7589: Status 404 returned error can't find the container with id c9fb7ee3fa91ea11b57d39df2f3d728a96994401b7f2537408860e9070bb7589 Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.033681 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65948bc6c-vwm6m"] Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.145776 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66664cb669-j765l"] Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.634986 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6b7486bc6d-lhplk" event={"ID":"a58bb950-bc15-4ca5-9e01-49c1e92fdf24","Type":"ContainerStarted","Data":"26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb"} Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.635304 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6b7486bc6d-lhplk" event={"ID":"a58bb950-bc15-4ca5-9e01-49c1e92fdf24","Type":"ContainerStarted","Data":"8e974300619cb153bc74e091a1d969f6eda605f3feb8e5115b469d944a125c0b"} Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.635323 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.638047 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hthh" event={"ID":"843b1d9d-26f2-42d5-b8ff-331b66efd5f8","Type":"ContainerStarted","Data":"7a924c89e2620139928431b96049e7da9bfa56fc19180750c7791ac6c14e31e9"} Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.640151 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerStarted","Data":"1ecf0861fabebb22b8a12d75877898faf2d5ce39be06cbc05afae3cadd820a5e"} Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.644830 4520 generic.go:334] "Generic (PLEG): container finished" podID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" containerID="fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9" exitCode=0 Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.644926 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" event={"ID":"2c3b80d9-dfeb-4120-a523-4f4ceea700c8","Type":"ContainerDied","Data":"fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9"} 
Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.645040 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" event={"ID":"2c3b80d9-dfeb-4120-a523-4f4ceea700c8","Type":"ContainerStarted","Data":"66ba0aa88e99f1b221939b7be8ecb45406d776207aeba9bddb50058900bb8875"}
Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.646883 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" event={"ID":"2c99ef8b-2ef2-4e57-996c-d74afbaa161e","Type":"ContainerStarted","Data":"c9fb7ee3fa91ea11b57d39df2f3d728a96994401b7f2537408860e9070bb7589"}
Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.648861 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66664cb669-j765l" event={"ID":"2ddf81a9-672a-457d-a233-087d73b4890a","Type":"ContainerStarted","Data":"a39eef770be28f66178eeb8a731b4dc4f69b4efcd4d3fc67897ab797017a40d8"}
Jan 30 07:01:45 crc kubenswrapper[4520]: I0130 07:01:45.662670 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6b7486bc6d-lhplk" podStartSLOduration=2.662653113 podStartE2EDuration="2.662653113s" podCreationTimestamp="2026-01-30 07:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:45.653978193 +0000 UTC m=+1019.282330374" watchObservedRunningTime="2026-01-30 07:01:45.662653113 +0000 UTC m=+1019.291005293"
Jan 30 07:01:46 crc kubenswrapper[4520]: I0130 07:01:46.762889 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" event={"ID":"2c3b80d9-dfeb-4120-a523-4f4ceea700c8","Type":"ContainerStarted","Data":"2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58"}
Jan 30 07:01:46 crc kubenswrapper[4520]: I0130 07:01:46.763177 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m"
Jan 30 07:01:46 crc kubenswrapper[4520]: I0130 07:01:46.767929 4520 generic.go:334] "Generic (PLEG): container finished" podID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerID="7a924c89e2620139928431b96049e7da9bfa56fc19180750c7791ac6c14e31e9" exitCode=0
Jan 30 07:01:46 crc kubenswrapper[4520]: I0130 07:01:46.768001 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hthh" event={"ID":"843b1d9d-26f2-42d5-b8ff-331b66efd5f8","Type":"ContainerDied","Data":"7a924c89e2620139928431b96049e7da9bfa56fc19180750c7791ac6c14e31e9"}
Jan 30 07:01:46 crc kubenswrapper[4520]: I0130 07:01:46.794044 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerStarted","Data":"1200d1a51eb59a07bd155e2cd066f1e8eaf9d811142a9383fa3598df70c08479"}
Jan 30 07:01:46 crc kubenswrapper[4520]: I0130 07:01:46.850110 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" podStartSLOduration=3.850088151 podStartE2EDuration="3.850088151s" podCreationTimestamp="2026-01-30 07:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:46.846665215 +0000 UTC m=+1020.475017395" watchObservedRunningTime="2026-01-30 07:01:46.850088151 +0000 UTC m=+1020.478440332"
Jan 30 07:01:47 crc kubenswrapper[4520]: I0130 07:01:47.944987 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cvbx4"]
Jan 30 07:01:47 crc kubenswrapper[4520]: I0130 07:01:47.946474 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:47 crc kubenswrapper[4520]: I0130 07:01:47.953545 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 30 07:01:47 crc kubenswrapper[4520]: I0130 07:01:47.956825 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 30 07:01:47 crc kubenswrapper[4520]: I0130 07:01:47.960539 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dtmtd"
Jan 30 07:01:47 crc kubenswrapper[4520]: I0130 07:01:47.976192 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cvbx4"]
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.039700 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-config-data\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.040091 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-scripts\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.040123 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.040212 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsf4p\" (UniqueName: \"kubernetes.io/projected/12c4f381-2282-4c3c-8735-8862b07e65dc-kube-api-access-lsf4p\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.142072 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsf4p\" (UniqueName: \"kubernetes.io/projected/12c4f381-2282-4c3c-8735-8862b07e65dc-kube-api-access-lsf4p\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.142271 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-config-data\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.142442 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-scripts\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.142479 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.153202 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-scripts\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.153708 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.156638 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-config-data\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.167187 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsf4p\" (UniqueName: \"kubernetes.io/projected/12c4f381-2282-4c3c-8735-8862b07e65dc-kube-api-access-lsf4p\") pod \"nova-cell0-conductor-db-sync-cvbx4\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.297204 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d85f5b788-9fjcm"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.356754 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cvbx4"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.732966 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d85f5b788-9fjcm"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.843195 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bf4fcb464-scxkz"]
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.843358 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" event={"ID":"2c99ef8b-2ef2-4e57-996c-d74afbaa161e","Type":"ContainerStarted","Data":"512ec0cfd4c2aa6ff9b71fad6954f1bf869d19bc298f81792cad52578cc47ac2"}
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.843438 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-bf4fcb464-scxkz" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerName="placement-log" containerID="cri-o://5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1" gracePeriod=30
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.844527 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.844879 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-bf4fcb464-scxkz" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerName="placement-api" containerID="cri-o://43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433" gracePeriod=30
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.907601 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerStarted","Data":"23266db8726f41f3525bfd8730816a6c7cb62872913468ab7cc524c262a4e89c"}
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.907670 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.954399 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" podStartSLOduration=2.195693225 podStartE2EDuration="4.954362369s" podCreationTimestamp="2026-01-30 07:01:44 +0000 UTC" firstStartedPulling="2026-01-30 07:01:45.015772993 +0000 UTC m=+1018.644125175" lastFinishedPulling="2026-01-30 07:01:47.774442139 +0000 UTC m=+1021.402794319" observedRunningTime="2026-01-30 07:01:48.871810776 +0000 UTC m=+1022.500162957" watchObservedRunningTime="2026-01-30 07:01:48.954362369 +0000 UTC m=+1022.582714550"
Jan 30 07:01:48 crc kubenswrapper[4520]: I0130 07:01:48.957891 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.037627372 podStartE2EDuration="13.957872758s" podCreationTimestamp="2026-01-30 07:01:35 +0000 UTC" firstStartedPulling="2026-01-30 07:01:42.852054576 +0000 UTC m=+1016.480406757" lastFinishedPulling="2026-01-30 07:01:47.772299962 +0000 UTC m=+1021.400652143" observedRunningTime="2026-01-30 07:01:48.93259297 +0000 UTC m=+1022.560945151" watchObservedRunningTime="2026-01-30 07:01:48.957872758 +0000 UTC m=+1022.586224939"
Jan 30 07:01:49 crc kubenswrapper[4520]: I0130 07:01:49.000426 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cvbx4"]
Jan 30 07:01:49 crc kubenswrapper[4520]: W0130 07:01:49.687152 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12c4f381_2282_4c3c_8735_8862b07e65dc.slice/crio-b735566a48a7b707de0ff1b0d7d679b4fdefa9a99920d9f9f1de3665fc38436c WatchSource:0}: Error finding container b735566a48a7b707de0ff1b0d7d679b4fdefa9a99920d9f9f1de3665fc38436c: Status 404 returned error can't find the container with id b735566a48a7b707de0ff1b0d7d679b4fdefa9a99920d9f9f1de3665fc38436c
Jan 30 07:01:49 crc kubenswrapper[4520]: I0130 07:01:49.924988 4520 generic.go:334] "Generic (PLEG): container finished" podID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerID="5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1" exitCode=143
Jan 30 07:01:49 crc kubenswrapper[4520]: I0130 07:01:49.925220 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bf4fcb464-scxkz" event={"ID":"5aad2c74-01f1-4dd2-95b4-5e4299adcb99","Type":"ContainerDied","Data":"5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1"}
Jan 30 07:01:49 crc kubenswrapper[4520]: I0130 07:01:49.929703 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cvbx4" event={"ID":"12c4f381-2282-4c3c-8735-8862b07e65dc","Type":"ContainerStarted","Data":"b735566a48a7b707de0ff1b0d7d679b4fdefa9a99920d9f9f1de3665fc38436c"}
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.662837 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-dc9bfd46d-rs8m5"]
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.664010 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.673380 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5ccfff75db-kf7nx"]
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.674268 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.702487 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5ccfff75db-kf7nx"]
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.713101 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8b6685bb8-85zvh"]
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.714977 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.718015 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-dc9bfd46d-rs8m5"]
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.721536 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pqv\" (UniqueName: \"kubernetes.io/projected/b4547396-9131-4720-a6b8-494a717b3b31-kube-api-access-q4pqv\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.721608 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data-custom\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.721670 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.721699 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-config-data\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.721815 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvr5n\" (UniqueName: \"kubernetes.io/projected/2541934c-0c62-4b72-b405-bfc672fc5568-kube-api-access-wvr5n\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.721869 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-config-data-custom\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.721921 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-combined-ca-bundle\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.721940 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-combined-ca-bundle\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.756147 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8b6685bb8-85zvh"]
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.824539 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvr5n\" (UniqueName: \"kubernetes.io/projected/2541934c-0c62-4b72-b405-bfc672fc5568-kube-api-access-wvr5n\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.824839 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-config-data-custom\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.824887 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-combined-ca-bundle\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.824904 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-combined-ca-bundle\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.824933 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4pqv\" (UniqueName: \"kubernetes.io/projected/b4547396-9131-4720-a6b8-494a717b3b31-kube-api-access-q4pqv\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.824958 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.824976 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-combined-ca-bundle\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.825003 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data-custom\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.825030 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.825055 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-config-data\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.826503 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data-custom\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.826558 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mdvk\" (UniqueName: \"kubernetes.io/projected/015102e7-4492-43ab-b32c-8938836fc162-kube-api-access-6mdvk\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.839446 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-combined-ca-bundle\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.839469 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data-custom\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.841201 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-combined-ca-bundle\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.842050 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-config-data-custom\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.842209 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.846443 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pqv\" (UniqueName: \"kubernetes.io/projected/b4547396-9131-4720-a6b8-494a717b3b31-kube-api-access-q4pqv\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.857268 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4547396-9131-4720-a6b8-494a717b3b31-config-data\") pod \"heat-engine-5ccfff75db-kf7nx\" (UID: \"b4547396-9131-4720-a6b8-494a717b3b31\") " pod="openstack/heat-engine-5ccfff75db-kf7nx"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.874762 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvr5n\" (UniqueName: \"kubernetes.io/projected/2541934c-0c62-4b72-b405-bfc672fc5568-kube-api-access-wvr5n\") pod \"heat-cfnapi-dc9bfd46d-rs8m5\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.928335 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.928466 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-combined-ca-bundle\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.928585 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data-custom\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.928629 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mdvk\" (UniqueName: \"kubernetes.io/projected/015102e7-4492-43ab-b32c-8938836fc162-kube-api-access-6mdvk\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.935143 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.941055 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data-custom\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.941553 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-combined-ca-bundle\") pod \"heat-api-8b6685bb8-85zvh\"
(UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh" Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.943718 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66664cb669-j765l" event={"ID":"2ddf81a9-672a-457d-a233-087d73b4890a","Type":"ContainerStarted","Data":"f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd"} Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.944652 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.955052 4520 generic.go:334] "Generic (PLEG): container finished" podID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerID="2b747fc744b96278e67ea47a8f4cfb4393466c3789a5b3eca465bed0bea2d640" exitCode=137 Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.955118 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mdvk\" (UniqueName: \"kubernetes.io/projected/015102e7-4492-43ab-b32c-8938836fc162-kube-api-access-6mdvk\") pod \"heat-api-8b6685bb8-85zvh\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " pod="openstack/heat-api-8b6685bb8-85zvh" Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.955280 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c459697cb-g922m" event={"ID":"3380703e-5659-4040-8b43-e3ada0eaa6b6","Type":"ContainerDied","Data":"2b747fc744b96278e67ea47a8f4cfb4393466c3789a5b3eca465bed0bea2d640"} Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.968154 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-66664cb669-j765l" podStartSLOduration=2.413738554 podStartE2EDuration="6.968143854s" podCreationTimestamp="2026-01-30 07:01:44 +0000 UTC" firstStartedPulling="2026-01-30 07:01:45.161878491 +0000 UTC m=+1018.790230673" lastFinishedPulling="2026-01-30 07:01:49.716283792 +0000 UTC m=+1023.344635973" observedRunningTime="2026-01-30 07:01:50.965793565 +0000 UTC m=+1024.594145746" watchObservedRunningTime="2026-01-30 07:01:50.968143854 +0000 UTC m=+1024.596496036" Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.974585 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hthh" event={"ID":"843b1d9d-26f2-42d5-b8ff-331b66efd5f8","Type":"ContainerStarted","Data":"b99c38bf6ffe9fb2362232ef28015a21fb9eefcaaf49a2018073a81502294137"} Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.978721 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" Jan 30 07:01:50 crc kubenswrapper[4520]: I0130 07:01:50.998725 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5ccfff75db-kf7nx" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.011451 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7hthh" podStartSLOduration=11.204962241 podStartE2EDuration="16.011430336s" podCreationTimestamp="2026-01-30 07:01:35 +0000 UTC" firstStartedPulling="2026-01-30 07:01:43.550976407 +0000 UTC m=+1017.179328588" lastFinishedPulling="2026-01-30 07:01:48.357444503 +0000 UTC m=+1021.985796683" observedRunningTime="2026-01-30 07:01:50.993265786 +0000 UTC m=+1024.621617966" watchObservedRunningTime="2026-01-30 07:01:51.011430336 +0000 UTC m=+1024.639782516" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.060474 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8b6685bb8-85zvh" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.543861 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c459697cb-g922m" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.695229 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3380703e-5659-4040-8b43-e3ada0eaa6b6-logs\") pod \"3380703e-5659-4040-8b43-e3ada0eaa6b6\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.695279 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-config-data\") pod \"3380703e-5659-4040-8b43-e3ada0eaa6b6\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.695298 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp8xh\" (UniqueName: \"kubernetes.io/projected/3380703e-5659-4040-8b43-e3ada0eaa6b6-kube-api-access-pp8xh\") pod \"3380703e-5659-4040-8b43-e3ada0eaa6b6\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.695423 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-combined-ca-bundle\") pod \"3380703e-5659-4040-8b43-e3ada0eaa6b6\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.695527 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-tls-certs\") pod \"3380703e-5659-4040-8b43-e3ada0eaa6b6\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.695572 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-scripts\") pod \"3380703e-5659-4040-8b43-e3ada0eaa6b6\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.695658 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-secret-key\") pod \"3380703e-5659-4040-8b43-e3ada0eaa6b6\" (UID: \"3380703e-5659-4040-8b43-e3ada0eaa6b6\") " Jan 30 07:01:51 
crc kubenswrapper[4520]: I0130 07:01:51.700494 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3380703e-5659-4040-8b43-e3ada0eaa6b6-logs" (OuterVolumeSpecName: "logs") pod "3380703e-5659-4040-8b43-e3ada0eaa6b6" (UID: "3380703e-5659-4040-8b43-e3ada0eaa6b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.704970 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3380703e-5659-4040-8b43-e3ada0eaa6b6" (UID: "3380703e-5659-4040-8b43-e3ada0eaa6b6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.708633 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3380703e-5659-4040-8b43-e3ada0eaa6b6-kube-api-access-pp8xh" (OuterVolumeSpecName: "kube-api-access-pp8xh") pod "3380703e-5659-4040-8b43-e3ada0eaa6b6" (UID: "3380703e-5659-4040-8b43-e3ada0eaa6b6"). InnerVolumeSpecName "kube-api-access-pp8xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.777464 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-scripts" (OuterVolumeSpecName: "scripts") pod "3380703e-5659-4040-8b43-e3ada0eaa6b6" (UID: "3380703e-5659-4040-8b43-e3ada0eaa6b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.788764 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3380703e-5659-4040-8b43-e3ada0eaa6b6" (UID: "3380703e-5659-4040-8b43-e3ada0eaa6b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.799743 4520 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.799744 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3380703e-5659-4040-8b43-e3ada0eaa6b6" (UID: "3380703e-5659-4040-8b43-e3ada0eaa6b6"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.799771 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3380703e-5659-4040-8b43-e3ada0eaa6b6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.799785 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp8xh\" (UniqueName: \"kubernetes.io/projected/3380703e-5659-4040-8b43-e3ada0eaa6b6-kube-api-access-pp8xh\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.799796 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.799807 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.811135 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-config-data" (OuterVolumeSpecName: "config-data") pod "3380703e-5659-4040-8b43-e3ada0eaa6b6" (UID: "3380703e-5659-4040-8b43-e3ada0eaa6b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:51 crc kubenswrapper[4520]: W0130 07:01:51.855202 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2541934c_0c62_4b72_b405_bfc672fc5568.slice/crio-a700c772d114507a9f099ecb09acd8d40ce292c53899924f1e3a26a466f4ba7d WatchSource:0}: Error finding container a700c772d114507a9f099ecb09acd8d40ce292c53899924f1e3a26a466f4ba7d: Status 404 returned error can't find the container with id a700c772d114507a9f099ecb09acd8d40ce292c53899924f1e3a26a466f4ba7d Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.863410 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-dc9bfd46d-rs8m5"] Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.901910 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3380703e-5659-4040-8b43-e3ada0eaa6b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.901941 4520 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3380703e-5659-4040-8b43-e3ada0eaa6b6-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:51 crc kubenswrapper[4520]: I0130 07:01:51.992916 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" event={"ID":"2541934c-0c62-4b72-b405-bfc672fc5568","Type":"ContainerStarted","Data":"a700c772d114507a9f099ecb09acd8d40ce292c53899924f1e3a26a466f4ba7d"} Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.018152 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8b6685bb8-85zvh"] Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.024377 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c459697cb-g922m" Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.025599 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c459697cb-g922m" event={"ID":"3380703e-5659-4040-8b43-e3ada0eaa6b6","Type":"ContainerDied","Data":"06337955ea19d6817ae9d812ae722f4d62d5f6f41377f0a593f497c064f9b33c"} Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.025711 4520 scope.go:117] "RemoveContainer" containerID="d03bf2e75cec449c2d1120c53868d2b6ad99cf296b31eb75a042471f6bea2caa" Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.156488 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5ccfff75db-kf7nx"] Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.194559 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c459697cb-g922m"] Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.219535 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c459697cb-g922m"] Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.467805 4520 scope.go:117] "RemoveContainer" containerID="2b747fc744b96278e67ea47a8f4cfb4393466c3789a5b3eca465bed0bea2d640" Jan 30 07:01:52 crc kubenswrapper[4520]: W0130 07:01:52.484470 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4547396_9131_4720_a6b8_494a717b3b31.slice/crio-ca2ab9752276d62b0b3d6a19a9306e9d2a47baac7b9aaa3061cfaa09cf754987 WatchSource:0}: Error finding container ca2ab9752276d62b0b3d6a19a9306e9d2a47baac7b9aaa3061cfaa09cf754987: Status 404 returned error can't find the container with id ca2ab9752276d62b0b3d6a19a9306e9d2a47baac7b9aaa3061cfaa09cf754987 Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.700603 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" path="/var/lib/kubelet/pods/3380703e-5659-4040-8b43-e3ada0eaa6b6/volumes" Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.857441 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.932496 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-scripts\") pod \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.932584 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-combined-ca-bundle\") pod \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.932677 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-public-tls-certs\") pod \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.932760 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-config-data\") pod \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.932865 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-logs\") pod \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.932996 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn2hn\" (UniqueName: \"kubernetes.io/projected/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-kube-api-access-pn2hn\") pod \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.933052 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-internal-tls-certs\") pod \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\" (UID: \"5aad2c74-01f1-4dd2-95b4-5e4299adcb99\") " Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.936138 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-logs" (OuterVolumeSpecName: "logs") pod "5aad2c74-01f1-4dd2-95b4-5e4299adcb99" (UID: "5aad2c74-01f1-4dd2-95b4-5e4299adcb99"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.945030 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-kube-api-access-pn2hn" (OuterVolumeSpecName: "kube-api-access-pn2hn") pod "5aad2c74-01f1-4dd2-95b4-5e4299adcb99" (UID: "5aad2c74-01f1-4dd2-95b4-5e4299adcb99"). InnerVolumeSpecName "kube-api-access-pn2hn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:52 crc kubenswrapper[4520]: I0130 07:01:52.953556 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-scripts" (OuterVolumeSpecName: "scripts") pod "5aad2c74-01f1-4dd2-95b4-5e4299adcb99" (UID: "5aad2c74-01f1-4dd2-95b4-5e4299adcb99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.036696 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.036730 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn2hn\" (UniqueName: \"kubernetes.io/projected/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-kube-api-access-pn2hn\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.036742 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.076201 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5ccfff75db-kf7nx" event={"ID":"b4547396-9131-4720-a6b8-494a717b3b31","Type":"ContainerStarted","Data":"6c6c675f9fe9f4d8e9fc587362698ce1c9f468adda74cb5ceb69f6ab6ba749d5"} Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.076270 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5ccfff75db-kf7nx" event={"ID":"b4547396-9131-4720-a6b8-494a717b3b31","Type":"ContainerStarted","Data":"ca2ab9752276d62b0b3d6a19a9306e9d2a47baac7b9aaa3061cfaa09cf754987"} Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.088394 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5ccfff75db-kf7nx" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.090005 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-config-data" (OuterVolumeSpecName: "config-data") pod "5aad2c74-01f1-4dd2-95b4-5e4299adcb99" (UID: "5aad2c74-01f1-4dd2-95b4-5e4299adcb99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.106411 4520 generic.go:334] "Generic (PLEG): container finished" podID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerID="43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433" exitCode=0 Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.106552 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bf4fcb464-scxkz" event={"ID":"5aad2c74-01f1-4dd2-95b4-5e4299adcb99","Type":"ContainerDied","Data":"43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433"} Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.106578 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bf4fcb464-scxkz" event={"ID":"5aad2c74-01f1-4dd2-95b4-5e4299adcb99","Type":"ContainerDied","Data":"556e9625966c7cb4e3e7a23fa74c7655fdabb1a8eb235006f30bdcb0198383d3"} Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.106611 4520 scope.go:117] "RemoveContainer" containerID="43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.106838 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bf4fcb464-scxkz" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.125912 4520 generic.go:334] "Generic (PLEG): container finished" podID="2541934c-0c62-4b72-b405-bfc672fc5568" containerID="80490cbef70f2712e268e2b20558e2a24aebb253079cbcdfc87dcd5ab2be5fa6" exitCode=1 Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.125999 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" event={"ID":"2541934c-0c62-4b72-b405-bfc672fc5568","Type":"ContainerDied","Data":"80490cbef70f2712e268e2b20558e2a24aebb253079cbcdfc87dcd5ab2be5fa6"} Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.126928 4520 scope.go:117] "RemoveContainer" containerID="80490cbef70f2712e268e2b20558e2a24aebb253079cbcdfc87dcd5ab2be5fa6" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.139983 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.142583 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5ccfff75db-kf7nx" podStartSLOduration=3.142558711 podStartE2EDuration="3.142558711s" podCreationTimestamp="2026-01-30 07:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:53.131199897 +0000 UTC m=+1026.759552078" watchObservedRunningTime="2026-01-30 07:01:53.142558711 +0000 UTC m=+1026.770910892" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.144678 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b6685bb8-85zvh" event={"ID":"015102e7-4492-43ab-b32c-8938836fc162","Type":"ContainerStarted","Data":"ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948"} Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.144710 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b6685bb8-85zvh" event={"ID":"015102e7-4492-43ab-b32c-8938836fc162","Type":"ContainerStarted","Data":"0fd0388d8508d4cd8b4ae0ec9b1917f063cbfd70d9fa6ed70ed2ac660fbc0ed7"} Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.145125 
4520 scope.go:117] "RemoveContainer" containerID="ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.158061 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5aad2c74-01f1-4dd2-95b4-5e4299adcb99" (UID: "5aad2c74-01f1-4dd2-95b4-5e4299adcb99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.184144 4520 scope.go:117] "RemoveContainer" containerID="5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.196760 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5aad2c74-01f1-4dd2-95b4-5e4299adcb99" (UID: "5aad2c74-01f1-4dd2-95b4-5e4299adcb99"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.241707 4520 scope.go:117] "RemoveContainer" containerID="43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.244556 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.244584 4520 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:53 crc kubenswrapper[4520]: E0130 07:01:53.245661 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433\": container with ID starting with 43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433 not found: ID does not exist" containerID="43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.245697 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433"} err="failed to get container status \"43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433\": rpc error: code = NotFound desc = could not find container \"43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433\": container with ID starting with 43e045f8d849846804154cb1fda8fb475c7e12ef5d17a3328b4f99e3ec97f433 not found: ID does not exist" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.245722 4520 scope.go:117] "RemoveContainer" containerID="5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1" Jan 30 07:01:53 crc kubenswrapper[4520]: E0130 07:01:53.247104 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1\": container with ID starting with 5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1 not found: ID does not exist" 
containerID="5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.247143 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1"} err="failed to get container status \"5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1\": rpc error: code = NotFound desc = could not find container \"5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1\": container with ID starting with 5d399f358896d91393d236f63b604bde350dc5c8e3ef19d92cc3d285d1ad44a1 not found: ID does not exist" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.298809 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5aad2c74-01f1-4dd2-95b4-5e4299adcb99" (UID: "5aad2c74-01f1-4dd2-95b4-5e4299adcb99"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.351407 4520 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad2c74-01f1-4dd2-95b4-5e4299adcb99-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.471455 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bf4fcb464-scxkz"] Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.503984 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-bf4fcb464-scxkz"] Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.894128 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c4c8c7bb-pfwmd"] Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.894784 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerName="heat-cfnapi" containerID="cri-o://512ec0cfd4c2aa6ff9b71fad6954f1bf869d19bc298f81792cad52578cc47ac2" gracePeriod=60 Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.906635 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66664cb669-j765l"] Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.933380 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.188:8000/healthcheck\": EOF" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.960621 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-fc44c7cd8-mhtpx"] Jan 30 07:01:53 crc kubenswrapper[4520]: E0130 07:01:53.961139 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerName="placement-api" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.961157 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerName="placement-api" Jan 30 07:01:53 crc kubenswrapper[4520]: E0130 07:01:53.961177 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.961183 4520 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" Jan 30 07:01:53 crc kubenswrapper[4520]: E0130 07:01:53.961194 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerName="placement-log" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.961199 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerName="placement-log" Jan 30 07:01:53 crc kubenswrapper[4520]: E0130 07:01:53.961211 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon-log" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.961216 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon-log" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.961383 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerName="placement-log" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.961398 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon-log" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.961407 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" containerName="placement-api" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.961418 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="3380703e-5659-4040-8b43-e3ada0eaa6b6" containerName="horizon" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.962184 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.971156 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-97977bbb9-v5xms"] Jan 30 07:01:53 crc kubenswrapper[4520]: I0130 07:01:53.972832 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.014026 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.014837 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.014968 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.015074 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.034378 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fc44c7cd8-mhtpx"] Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.067990 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-97977bbb9-v5xms"] Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.073178 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-public-tls-certs\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.073219 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vmw\" (UniqueName: \"kubernetes.io/projected/ac4df919-f28d-4178-a68c-943478f61669-kube-api-access-s4vmw\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.073273 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-internal-tls-certs\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.073292 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-combined-ca-bundle\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.073381 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-config-data-custom\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.073438 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-config-data\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc 
kubenswrapper[4520]: I0130 07:01:54.151262 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.165909 4520 generic.go:334] "Generic (PLEG): container finished" podID="2541934c-0c62-4b72-b405-bfc672fc5568" containerID="9eb896f1ea30dfa0588c510e5a206809aeaf9286c169bbabcce5066533d7ac90" exitCode=1 Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.165969 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" event={"ID":"2541934c-0c62-4b72-b405-bfc672fc5568","Type":"ContainerDied","Data":"9eb896f1ea30dfa0588c510e5a206809aeaf9286c169bbabcce5066533d7ac90"} Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.166003 4520 scope.go:117] "RemoveContainer" containerID="80490cbef70f2712e268e2b20558e2a24aebb253079cbcdfc87dcd5ab2be5fa6" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.166347 4520 scope.go:117] "RemoveContainer" containerID="9eb896f1ea30dfa0588c510e5a206809aeaf9286c169bbabcce5066533d7ac90" Jan 30 07:01:54 crc kubenswrapper[4520]: E0130 07:01:54.166565 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-dc9bfd46d-rs8m5_openstack(2541934c-0c62-4b72-b405-bfc672fc5568)\"" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.193782 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-config-data-custom\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.193823 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-config-data\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.193855 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-config-data-custom\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.194772 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdbk\" (UniqueName: \"kubernetes.io/projected/d9a5ebee-81c4-4354-a8fe-800820e313d5-kube-api-access-6xdbk\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.194963 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-combined-ca-bundle\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 
07:01:54.194998 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-public-tls-certs\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.195050 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-public-tls-certs\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.195076 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4vmw\" (UniqueName: \"kubernetes.io/projected/ac4df919-f28d-4178-a68c-943478f61669-kube-api-access-s4vmw\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.195123 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-internal-tls-certs\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.195141 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-combined-ca-bundle\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.195166 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-internal-tls-certs\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.195185 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-config-data\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.197287 4520 generic.go:334] "Generic (PLEG): container finished" podID="015102e7-4492-43ab-b32c-8938836fc162" containerID="ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948" exitCode=1 Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.197319 4520 generic.go:334] "Generic (PLEG): container finished" podID="015102e7-4492-43ab-b32c-8938836fc162" containerID="1d6d04a10943b51a0cebea325ad521b8506c9c8595442da6368b9ebf9a52f8d1" exitCode=1 Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.197473 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-66664cb669-j765l" podUID="2ddf81a9-672a-457d-a233-087d73b4890a" containerName="heat-api" 
containerID="cri-o://f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd" gracePeriod=60 Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.197577 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b6685bb8-85zvh" event={"ID":"015102e7-4492-43ab-b32c-8938836fc162","Type":"ContainerDied","Data":"ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948"} Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.197616 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b6685bb8-85zvh" event={"ID":"015102e7-4492-43ab-b32c-8938836fc162","Type":"ContainerDied","Data":"1d6d04a10943b51a0cebea325ad521b8506c9c8595442da6368b9ebf9a52f8d1"} Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.198000 4520 scope.go:117] "RemoveContainer" containerID="1d6d04a10943b51a0cebea325ad521b8506c9c8595442da6368b9ebf9a52f8d1" Jan 30 07:01:54 crc kubenswrapper[4520]: E0130 07:01:54.198221 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8b6685bb8-85zvh_openstack(015102e7-4492-43ab-b32c-8938836fc162)\"" pod="openstack/heat-api-8b6685bb8-85zvh" podUID="015102e7-4492-43ab-b32c-8938836fc162" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.219159 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-internal-tls-certs\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.219705 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-combined-ca-bundle\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.220131 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-config-data\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.226148 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-config-data-custom\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.230066 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac4df919-f28d-4178-a68c-943478f61669-public-tls-certs\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: \"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.244199 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4vmw\" (UniqueName: \"kubernetes.io/projected/ac4df919-f28d-4178-a68c-943478f61669-kube-api-access-s4vmw\") pod \"heat-cfnapi-fc44c7cd8-mhtpx\" (UID: 
\"ac4df919-f28d-4178-a68c-943478f61669\") " pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.281390 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.297117 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-combined-ca-bundle\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.297175 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-public-tls-certs\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.297226 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-internal-tls-certs\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.297246 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-config-data\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.297327 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-config-data-custom\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.297455 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xdbk\" (UniqueName: \"kubernetes.io/projected/d9a5ebee-81c4-4354-a8fe-800820e313d5-kube-api-access-6xdbk\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.318287 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-config-data-custom\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.321811 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-config-data\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.323310 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-combined-ca-bundle\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.327250 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-public-tls-certs\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.327808 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a5ebee-81c4-4354-a8fe-800820e313d5-internal-tls-certs\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.345747 4520 scope.go:117] "RemoveContainer" containerID="ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.347660 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.356394 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xdbk\" (UniqueName: \"kubernetes.io/projected/d9a5ebee-81c4-4354-a8fe-800820e313d5-kube-api-access-6xdbk\") pod \"heat-api-97977bbb9-v5xms\" (UID: \"d9a5ebee-81c4-4354-a8fe-800820e313d5\") " pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.393723 4520 scope.go:117] "RemoveContainer" containerID="ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948" Jan 30 07:01:54 crc kubenswrapper[4520]: E0130 07:01:54.397589 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948\": container with ID starting with ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948 not found: ID does not exist" containerID="ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.397622 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948"} err="failed to get container status \"ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948\": rpc error: code = NotFound desc = could not find container \"ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948\": container with ID starting with ada3f181e1cdeebfe2b3aae79f5f524017ab9e7f3a2501db42e524c2ff912948 not found: ID does not exist" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.449703 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c548f5455-gc5z9"] Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.449954 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" podUID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" containerName="dnsmasq-dns" containerID="cri-o://398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac" gracePeriod=10 Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.596416 4520 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:54 crc kubenswrapper[4520]: I0130 07:01:54.715021 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aad2c74-01f1-4dd2-95b4-5e4299adcb99" path="/var/lib/kubelet/pods/5aad2c74-01f1-4dd2-95b4-5e4299adcb99/volumes" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.032863 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fc44c7cd8-mhtpx"] Jan 30 07:01:55 crc kubenswrapper[4520]: W0130 07:01:55.055136 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac4df919_f28d_4178_a68c_943478f61669.slice/crio-7fdb36de936d4b18097139214dba0efdee6dcfd2355ba96bee406d1aec7a2a6b WatchSource:0}: Error finding container 7fdb36de936d4b18097139214dba0efdee6dcfd2355ba96bee406d1aec7a2a6b: Status 404 returned error can't find the container with id 7fdb36de936d4b18097139214dba0efdee6dcfd2355ba96bee406d1aec7a2a6b Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.208964 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.270358 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-nb\") pod \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.270401 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-svc\") pod \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.270436 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-config\") pod \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.270737 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-swift-storage-0\") pod \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.270852 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svcg9\" (UniqueName: \"kubernetes.io/projected/885d7c94-3859-4ab4-a1e1-203588ca6f3c-kube-api-access-svcg9\") pod \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.270867 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-sb\") pod \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\" (UID: \"885d7c94-3859-4ab4-a1e1-203588ca6f3c\") " Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.287706 4520 scope.go:117] "RemoveContainer" containerID="9eb896f1ea30dfa0588c510e5a206809aeaf9286c169bbabcce5066533d7ac90" Jan 30 07:01:55 
crc kubenswrapper[4520]: I0130 07:01:55.289426 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885d7c94-3859-4ab4-a1e1-203588ca6f3c-kube-api-access-svcg9" (OuterVolumeSpecName: "kube-api-access-svcg9") pod "885d7c94-3859-4ab4-a1e1-203588ca6f3c" (UID: "885d7c94-3859-4ab4-a1e1-203588ca6f3c"). InnerVolumeSpecName "kube-api-access-svcg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.302879 4520 scope.go:117] "RemoveContainer" containerID="1d6d04a10943b51a0cebea325ad521b8506c9c8595442da6368b9ebf9a52f8d1" Jan 30 07:01:55 crc kubenswrapper[4520]: E0130 07:01:55.305265 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-dc9bfd46d-rs8m5_openstack(2541934c-0c62-4b72-b405-bfc672fc5568)\"" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.311633 4520 generic.go:334] "Generic (PLEG): container finished" podID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" containerID="398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac" exitCode=0 Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.311732 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" event={"ID":"885d7c94-3859-4ab4-a1e1-203588ca6f3c","Type":"ContainerDied","Data":"398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac"} Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.311767 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" event={"ID":"885d7c94-3859-4ab4-a1e1-203588ca6f3c","Type":"ContainerDied","Data":"fb8dcafdc933684e5505b91db15cad05c8816c935ee2bf8a69edd90284d45669"} Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.311786 4520 scope.go:117] "RemoveContainer" containerID="398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.311925 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c548f5455-gc5z9" Jan 30 07:01:55 crc kubenswrapper[4520]: E0130 07:01:55.320570 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8b6685bb8-85zvh_openstack(015102e7-4492-43ab-b32c-8938836fc162)\"" pod="openstack/heat-api-8b6685bb8-85zvh" podUID="015102e7-4492-43ab-b32c-8938836fc162" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.332715 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-97977bbb9-v5xms"] Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.334163 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" event={"ID":"ac4df919-f28d-4178-a68c-943478f61669","Type":"ContainerStarted","Data":"7fdb36de936d4b18097139214dba0efdee6dcfd2355ba96bee406d1aec7a2a6b"} Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.383307 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svcg9\" (UniqueName: \"kubernetes.io/projected/885d7c94-3859-4ab4-a1e1-203588ca6f3c-kube-api-access-svcg9\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.395773 4520 scope.go:117] "RemoveContainer" containerID="dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.443407 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "885d7c94-3859-4ab4-a1e1-203588ca6f3c" (UID: "885d7c94-3859-4ab4-a1e1-203588ca6f3c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.460273 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "885d7c94-3859-4ab4-a1e1-203588ca6f3c" (UID: "885d7c94-3859-4ab4-a1e1-203588ca6f3c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.466690 4520 scope.go:117] "RemoveContainer" containerID="398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac" Jan 30 07:01:55 crc kubenswrapper[4520]: E0130 07:01:55.472625 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac\": container with ID starting with 398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac not found: ID does not exist" containerID="398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.472673 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac"} err="failed to get container status \"398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac\": rpc error: code = NotFound desc = could not find container \"398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac\": container with ID starting with 398480d01d59f4275a291ca2bdf0d31b32ad1541ccbc7f43e8a5aaac5db2fcac not found: ID does not exist" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.472701 4520 scope.go:117] "RemoveContainer" containerID="dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763" Jan 30 07:01:55 crc kubenswrapper[4520]: E0130 07:01:55.473843 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763\": container with ID starting with dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763 not found: ID does not exist" containerID="dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.473872 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763"} err="failed to get container status \"dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763\": rpc error: code = NotFound desc = could not find container \"dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763\": container with ID starting with dc83f2c670db04565c276722e08f2334797b451eee1d26537724198d6b24b763 not found: ID does not exist" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.488800 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "885d7c94-3859-4ab4-a1e1-203588ca6f3c" (UID: "885d7c94-3859-4ab4-a1e1-203588ca6f3c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.491874 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.491901 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.491913 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.494686 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "885d7c94-3859-4ab4-a1e1-203588ca6f3c" (UID: "885d7c94-3859-4ab4-a1e1-203588ca6f3c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.514986 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.515606 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.536387 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-config" (OuterVolumeSpecName: "config") pod "885d7c94-3859-4ab4-a1e1-203588ca6f3c" (UID: "885d7c94-3859-4ab4-a1e1-203588ca6f3c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.594025 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.594064 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885d7c94-3859-4ab4-a1e1-203588ca6f3c-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.666912 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c548f5455-gc5z9"] Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.679553 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c548f5455-gc5z9"] Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.980108 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" Jan 30 07:01:55 crc kubenswrapper[4520]: I0130 07:01:55.980424 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.061241 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-8b6685bb8-85zvh" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.061298 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8b6685bb8-85zvh" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.294043 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.422781 4520 generic.go:334] "Generic (PLEG): container finished" podID="2ddf81a9-672a-457d-a233-087d73b4890a" containerID="f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd" exitCode=0 Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.422862 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66664cb669-j765l" event={"ID":"2ddf81a9-672a-457d-a233-087d73b4890a","Type":"ContainerDied","Data":"f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd"} Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.422908 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66664cb669-j765l" event={"ID":"2ddf81a9-672a-457d-a233-087d73b4890a","Type":"ContainerDied","Data":"a39eef770be28f66178eeb8a731b4dc4f69b4efcd4d3fc67897ab797017a40d8"} Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.422936 4520 scope.go:117] "RemoveContainer" containerID="f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.423061 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66664cb669-j765l" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.447182 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-combined-ca-bundle\") pod \"2ddf81a9-672a-457d-a233-087d73b4890a\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.447230 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data\") pod \"2ddf81a9-672a-457d-a233-087d73b4890a\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.447419 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p67lf\" (UniqueName: \"kubernetes.io/projected/2ddf81a9-672a-457d-a233-087d73b4890a-kube-api-access-p67lf\") pod \"2ddf81a9-672a-457d-a233-087d73b4890a\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.447498 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data-custom\") pod \"2ddf81a9-672a-457d-a233-087d73b4890a\" (UID: \"2ddf81a9-672a-457d-a233-087d73b4890a\") " Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.506299 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-97977bbb9-v5xms" event={"ID":"d9a5ebee-81c4-4354-a8fe-800820e313d5","Type":"ContainerStarted","Data":"822dcd13b80e7d49638b5749a3ce1a3930580c010b0da11038bb07d2d489c2c4"} Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.506372 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-97977bbb9-v5xms" event={"ID":"d9a5ebee-81c4-4354-a8fe-800820e313d5","Type":"ContainerStarted","Data":"e88e4a3d1bef6301d554e011519928af8996ecb2d1356fc21236c15d5d883554"} Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.506414 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-97977bbb9-v5xms" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.508713 4520 scope.go:117] "RemoveContainer" containerID="f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd" Jan 30 07:01:56 crc kubenswrapper[4520]: E0130 07:01:56.520713 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd\": container with ID starting with f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd not found: ID does not exist" containerID="f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.521060 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd"} err="failed to get container status \"f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd\": rpc error: code = NotFound desc = could not find container \"f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd\": container with ID starting with f18ad98637686cac8c96aa757e383fb6eab069e5958ca7184a1ce856a37251cd not found: ID does not exist" Jan 30 07:01:56 crc 
kubenswrapper[4520]: I0130 07:01:56.550718 4520 scope.go:117] "RemoveContainer" containerID="1d6d04a10943b51a0cebea325ad521b8506c9c8595442da6368b9ebf9a52f8d1" Jan 30 07:01:56 crc kubenswrapper[4520]: E0130 07:01:56.551041 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8b6685bb8-85zvh_openstack(015102e7-4492-43ab-b32c-8938836fc162)\"" pod="openstack/heat-api-8b6685bb8-85zvh" podUID="015102e7-4492-43ab-b32c-8938836fc162" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.552061 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" event={"ID":"ac4df919-f28d-4178-a68c-943478f61669","Type":"ContainerStarted","Data":"4075e3c8e8f086a99b3d881e14486467177ce2531fffb3a01e3affa773f00000"} Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.552972 4520 scope.go:117] "RemoveContainer" containerID="9eb896f1ea30dfa0588c510e5a206809aeaf9286c169bbabcce5066533d7ac90" Jan 30 07:01:56 crc kubenswrapper[4520]: E0130 07:01:56.553184 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-dc9bfd46d-rs8m5_openstack(2541934c-0c62-4b72-b405-bfc672fc5568)\"" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.553197 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.583242 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7hthh" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="registry-server" probeResult="failure" output=< Jan 30 07:01:56 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:01:56 crc kubenswrapper[4520]: > Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.585537 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2ddf81a9-672a-457d-a233-087d73b4890a" (UID: "2ddf81a9-672a-457d-a233-087d73b4890a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.586670 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ddf81a9-672a-457d-a233-087d73b4890a-kube-api-access-p67lf" (OuterVolumeSpecName: "kube-api-access-p67lf") pod "2ddf81a9-672a-457d-a233-087d73b4890a" (UID: "2ddf81a9-672a-457d-a233-087d73b4890a"). InnerVolumeSpecName "kube-api-access-p67lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.586796 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data" (OuterVolumeSpecName: "config-data") pod "2ddf81a9-672a-457d-a233-087d73b4890a" (UID: "2ddf81a9-672a-457d-a233-087d73b4890a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.588284 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ddf81a9-672a-457d-a233-087d73b4890a" (UID: "2ddf81a9-672a-457d-a233-087d73b4890a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.589623 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-97977bbb9-v5xms" podStartSLOduration=3.589604691 podStartE2EDuration="3.589604691s" podCreationTimestamp="2026-01-30 07:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:56.547112763 +0000 UTC m=+1030.175464944" watchObservedRunningTime="2026-01-30 07:01:56.589604691 +0000 UTC m=+1030.217956873" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.592042 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx" podStartSLOduration=3.5919231700000003 podStartE2EDuration="3.59192317s" podCreationTimestamp="2026-01-30 07:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:56.578680854 +0000 UTC m=+1030.207033035" watchObservedRunningTime="2026-01-30 07:01:56.59192317 +0000 UTC m=+1030.220275351" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.656579 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p67lf\" (UniqueName: \"kubernetes.io/projected/2ddf81a9-672a-457d-a233-087d73b4890a-kube-api-access-p67lf\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.656622 4520 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.656635 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.656658 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddf81a9-672a-457d-a233-087d73b4890a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.721506 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" path="/var/lib/kubelet/pods/885d7c94-3859-4ab4-a1e1-203588ca6f3c/volumes" Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.770302 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66664cb669-j765l"] Jan 30 07:01:56 crc kubenswrapper[4520]: I0130 07:01:56.801455 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-66664cb669-j765l"] Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.400446 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.400838 4520 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="ceilometer-central-agent" containerID="cri-o://a4ba56155730aa47005206621dbdbf22dc6eff2744b886b1ddf481ef64c91099" gracePeriod=30 Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.401209 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="proxy-httpd" containerID="cri-o://23266db8726f41f3525bfd8730816a6c7cb62872913468ab7cc524c262a4e89c" gracePeriod=30 Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.401283 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="ceilometer-notification-agent" containerID="cri-o://1ecf0861fabebb22b8a12d75877898faf2d5ce39be06cbc05afae3cadd820a5e" gracePeriod=30 Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.401491 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="sg-core" containerID="cri-o://1200d1a51eb59a07bd155e2cd066f1e8eaf9d811142a9383fa3598df70c08479" gracePeriod=30 Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.580471 4520 generic.go:334] "Generic (PLEG): container finished" podID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerID="1200d1a51eb59a07bd155e2cd066f1e8eaf9d811142a9383fa3598df70c08479" exitCode=2 Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.580581 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerDied","Data":"1200d1a51eb59a07bd155e2cd066f1e8eaf9d811142a9383fa3598df70c08479"} Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.585695 4520 scope.go:117] "RemoveContainer" containerID="9eb896f1ea30dfa0588c510e5a206809aeaf9286c169bbabcce5066533d7ac90" Jan 30 07:01:57 crc kubenswrapper[4520]: E0130 07:01:57.585969 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-dc9bfd46d-rs8m5_openstack(2541934c-0c62-4b72-b405-bfc672fc5568)\"" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.760298 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lrqhk"] Jan 30 07:01:57 crc kubenswrapper[4520]: E0130 07:01:57.761398 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" containerName="init" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.761419 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" containerName="init" Jan 30 07:01:57 crc kubenswrapper[4520]: E0130 07:01:57.761455 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ddf81a9-672a-457d-a233-087d73b4890a" containerName="heat-api" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.761464 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ddf81a9-672a-457d-a233-087d73b4890a" containerName="heat-api" Jan 30 07:01:57 crc kubenswrapper[4520]: E0130 07:01:57.761472 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" 
containerName="dnsmasq-dns" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.761479 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" containerName="dnsmasq-dns" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.761725 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="885d7c94-3859-4ab4-a1e1-203588ca6f3c" containerName="dnsmasq-dns" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.761757 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ddf81a9-672a-457d-a233-087d73b4890a" containerName="heat-api" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.763476 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.793992 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.794851 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.808322 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrqhk"] Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.892260 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-utilities\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.892352 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-catalog-content\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.892671 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvhl\" (UniqueName: \"kubernetes.io/projected/61a58f46-d0e7-4ca3-b01d-52758e84d242-kube-api-access-szvhl\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.994437 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-utilities\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.994501 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-catalog-content\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.994605 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szvhl\" (UniqueName: \"kubernetes.io/projected/61a58f46-d0e7-4ca3-b01d-52758e84d242-kube-api-access-szvhl\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.995592 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-utilities\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:57 crc kubenswrapper[4520]: I0130 07:01:57.995862 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-catalog-content\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.021649 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szvhl\" (UniqueName: \"kubernetes.io/projected/61a58f46-d0e7-4ca3-b01d-52758e84d242-kube-api-access-szvhl\") pod \"redhat-operators-lrqhk\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.094985 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.625992 4520 generic.go:334] "Generic (PLEG): container finished" podID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerID="23266db8726f41f3525bfd8730816a6c7cb62872913468ab7cc524c262a4e89c" exitCode=0 Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.626552 4520 generic.go:334] "Generic (PLEG): container finished" podID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerID="1ecf0861fabebb22b8a12d75877898faf2d5ce39be06cbc05afae3cadd820a5e" exitCode=0 Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.626567 4520 generic.go:334] "Generic (PLEG): container finished" podID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerID="a4ba56155730aa47005206621dbdbf22dc6eff2744b886b1ddf481ef64c91099" exitCode=0 Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.627789 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerDied","Data":"23266db8726f41f3525bfd8730816a6c7cb62872913468ab7cc524c262a4e89c"} Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.627880 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerDied","Data":"1ecf0861fabebb22b8a12d75877898faf2d5ce39be06cbc05afae3cadd820a5e"} Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.627895 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerDied","Data":"a4ba56155730aa47005206621dbdbf22dc6eff2744b886b1ddf481ef64c91099"} Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.735127 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ddf81a9-672a-457d-a233-087d73b4890a" path="/var/lib/kubelet/pods/2ddf81a9-672a-457d-a233-087d73b4890a/volumes" Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.772404 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrqhk"] Jan 30 07:01:58 crc kubenswrapper[4520]: W0130 07:01:58.788487 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61a58f46_d0e7_4ca3_b01d_52758e84d242.slice/crio-1acd1cedb41b23acacdbe8c8eb111bf96cc8bfa3a7a7b62554a2c5147bd92b48 WatchSource:0}: Error finding container 1acd1cedb41b23acacdbe8c8eb111bf96cc8bfa3a7a7b62554a2c5147bd92b48: Status 404 returned error can't find the container with id 1acd1cedb41b23acacdbe8c8eb111bf96cc8bfa3a7a7b62554a2c5147bd92b48 Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.888743 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.940634 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-log-httpd\") pod \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.940769 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-scripts\") pod \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.940863 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-sg-core-conf-yaml\") pod \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.940893 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtlpz\" (UniqueName: \"kubernetes.io/projected/9424be29-ccf4-449c-ad6a-dae1997dd5ab-kube-api-access-wtlpz\") pod \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.941009 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-combined-ca-bundle\") pod \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.941063 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-run-httpd\") pod \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.941107 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-config-data\") pod \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\" (UID: \"9424be29-ccf4-449c-ad6a-dae1997dd5ab\") " Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.946130 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9424be29-ccf4-449c-ad6a-dae1997dd5ab" (UID: "9424be29-ccf4-449c-ad6a-dae1997dd5ab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.946936 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9424be29-ccf4-449c-ad6a-dae1997dd5ab" (UID: "9424be29-ccf4-449c-ad6a-dae1997dd5ab"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:01:58 crc kubenswrapper[4520]: I0130 07:01:58.997969 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-scripts" (OuterVolumeSpecName: "scripts") pod "9424be29-ccf4-449c-ad6a-dae1997dd5ab" (UID: "9424be29-ccf4-449c-ad6a-dae1997dd5ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.002381 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9424be29-ccf4-449c-ad6a-dae1997dd5ab-kube-api-access-wtlpz" (OuterVolumeSpecName: "kube-api-access-wtlpz") pod "9424be29-ccf4-449c-ad6a-dae1997dd5ab" (UID: "9424be29-ccf4-449c-ad6a-dae1997dd5ab"). InnerVolumeSpecName "kube-api-access-wtlpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.044828 4520 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.044864 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.044875 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtlpz\" (UniqueName: \"kubernetes.io/projected/9424be29-ccf4-449c-ad6a-dae1997dd5ab-kube-api-access-wtlpz\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.044886 4520 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9424be29-ccf4-449c-ad6a-dae1997dd5ab-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.062217 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9424be29-ccf4-449c-ad6a-dae1997dd5ab" (UID: "9424be29-ccf4-449c-ad6a-dae1997dd5ab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.138919 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9424be29-ccf4-449c-ad6a-dae1997dd5ab" (UID: "9424be29-ccf4-449c-ad6a-dae1997dd5ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.147325 4520 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.147357 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.212583 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-config-data" (OuterVolumeSpecName: "config-data") pod "9424be29-ccf4-449c-ad6a-dae1997dd5ab" (UID: "9424be29-ccf4-449c-ad6a-dae1997dd5ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.257713 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9424be29-ccf4-449c-ad6a-dae1997dd5ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.322614 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.323561 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-log" containerID="cri-o://ef72d32e988252b7696fb6bdb1d9060db9878a67f2e9e493a010bf5f9aca2e05" gracePeriod=30 Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.323695 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-httpd" containerID="cri-o://e8bb2877ea98fb6556ebc703ed33a000fb248bc107256b5ccb28d878fb9b762b" gracePeriod=30 Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.642507 4520 generic.go:334] "Generic (PLEG): container finished" podID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerID="19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0" exitCode=0 Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.642629 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrqhk" event={"ID":"61a58f46-d0e7-4ca3-b01d-52758e84d242","Type":"ContainerDied","Data":"19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0"} Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.642688 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrqhk" event={"ID":"61a58f46-d0e7-4ca3-b01d-52758e84d242","Type":"ContainerStarted","Data":"1acd1cedb41b23acacdbe8c8eb111bf96cc8bfa3a7a7b62554a2c5147bd92b48"} Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.649679 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9424be29-ccf4-449c-ad6a-dae1997dd5ab","Type":"ContainerDied","Data":"b0159c197f61740cf184c06bdfb93a99bd05d2be5f65f9ef8ead7f6ad0961484"} Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.649764 4520 scope.go:117] "RemoveContainer" containerID="23266db8726f41f3525bfd8730816a6c7cb62872913468ab7cc524c262a4e89c" Jan 30 07:01:59 
crc kubenswrapper[4520]: I0130 07:01:59.650083 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.688738 4520 generic.go:334] "Generic (PLEG): container finished" podID="787adbf3-a537-453d-a7fc-efbbdec67245" containerID="ef72d32e988252b7696fb6bdb1d9060db9878a67f2e9e493a010bf5f9aca2e05" exitCode=143 Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.688807 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"787adbf3-a537-453d-a7fc-efbbdec67245","Type":"ContainerDied","Data":"ef72d32e988252b7696fb6bdb1d9060db9878a67f2e9e493a010bf5f9aca2e05"} Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.706231 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.715039 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.735854 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:59 crc kubenswrapper[4520]: E0130 07:01:59.736280 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="proxy-httpd" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.736299 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="proxy-httpd" Jan 30 07:01:59 crc kubenswrapper[4520]: E0130 07:01:59.736319 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="ceilometer-central-agent" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.736327 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="ceilometer-central-agent" Jan 30 07:01:59 crc kubenswrapper[4520]: E0130 07:01:59.736336 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="ceilometer-notification-agent" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.736341 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="ceilometer-notification-agent" Jan 30 07:01:59 crc kubenswrapper[4520]: E0130 07:01:59.736356 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="sg-core" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.736362 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="sg-core" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.737421 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="ceilometer-central-agent" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.737458 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="ceilometer-notification-agent" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.737482 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="sg-core" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.737492 4520 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" containerName="proxy-httpd" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.739213 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.747563 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.750577 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.754383 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.771606 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-run-httpd\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.771653 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.771775 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-scripts\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.771833 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.771861 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-config-data\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.771937 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ktn\" (UniqueName: \"kubernetes.io/projected/dd7bb964-36cf-4819-9468-95da06ce8e86-kube-api-access-d8ktn\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.771971 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-log-httpd\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.873553 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-run-httpd\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.873599 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.873679 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-scripts\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.873714 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.873733 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-config-data\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.873776 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ktn\" (UniqueName: \"kubernetes.io/projected/dd7bb964-36cf-4819-9468-95da06ce8e86-kube-api-access-d8ktn\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.873801 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-log-httpd\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.874017 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-run-httpd\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.874127 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-log-httpd\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.878398 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-scripts\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.880219 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.882272 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-config-data\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.883000 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:01:59 crc kubenswrapper[4520]: I0130 07:01:59.893107 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ktn\" (UniqueName: \"kubernetes.io/projected/dd7bb964-36cf-4819-9468-95da06ce8e86-kube-api-access-d8ktn\") pod \"ceilometer-0\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " pod="openstack/ceilometer-0" Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.054831 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.340098 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.188:8000/healthcheck\": read tcp 10.217.0.2:42334->10.217.0.188:8000: read: connection reset by peer" Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.340685 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.188:8000/healthcheck\": dial tcp 10.217.0.188:8000: connect: connection refused" Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.544733 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.544973 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerName="glance-log" containerID="cri-o://37c52a65cacf4ff4c8e717a6432b07b0aa845022d36485b2f1128811dd9e3c3a" gracePeriod=30 Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.545067 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerName="glance-httpd" containerID="cri-o://031bb27045d77e81a0ede0f6f9ccfebeb7a22a66da950d24f197e63f2ec65d97" gracePeriod=30 Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.709608 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9424be29-ccf4-449c-ad6a-dae1997dd5ab" path="/var/lib/kubelet/pods/9424be29-ccf4-449c-ad6a-dae1997dd5ab/volumes" Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.713068 4520 generic.go:334] "Generic (PLEG): container finished" podID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerID="512ec0cfd4c2aa6ff9b71fad6954f1bf869d19bc298f81792cad52578cc47ac2" 
Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.713161 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" event={"ID":"2c99ef8b-2ef2-4e57-996c-d74afbaa161e","Type":"ContainerDied","Data":"512ec0cfd4c2aa6ff9b71fad6954f1bf869d19bc298f81792cad52578cc47ac2"}
Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.721744 4520 generic.go:334] "Generic (PLEG): container finished" podID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerID="37c52a65cacf4ff4c8e717a6432b07b0aa845022d36485b2f1128811dd9e3c3a" exitCode=143
Jan 30 07:02:00 crc kubenswrapper[4520]: I0130 07:02:00.721794 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"be9a112d-54bd-4ecd-bd57-5649fb5ae79f","Type":"ContainerDied","Data":"37c52a65cacf4ff4c8e717a6432b07b0aa845022d36485b2f1128811dd9e3c3a"}
Jan 30 07:02:01 crc kubenswrapper[4520]: I0130 07:02:01.163892 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 07:02:02 crc kubenswrapper[4520]: I0130 07:02:02.753728 4520 generic.go:334] "Generic (PLEG): container finished" podID="787adbf3-a537-453d-a7fc-efbbdec67245" containerID="e8bb2877ea98fb6556ebc703ed33a000fb248bc107256b5ccb28d878fb9b762b" exitCode=0
Jan 30 07:02:02 crc kubenswrapper[4520]: I0130 07:02:02.753830 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"787adbf3-a537-453d-a7fc-efbbdec67245","Type":"ContainerDied","Data":"e8bb2877ea98fb6556ebc703ed33a000fb248bc107256b5ccb28d878fb9b762b"}
Jan 30 07:02:04 crc kubenswrapper[4520]: I0130 07:02:04.354379 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: connect: connection refused"
Jan 30 07:02:04 crc kubenswrapper[4520]: I0130 07:02:04.354417 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: connect: connection refused"
Jan 30 07:02:04 crc kubenswrapper[4520]: I0130 07:02:04.381919 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.188:8000/healthcheck\": dial tcp 10.217.0.188:8000: connect: connection refused"
Jan 30 07:02:04 crc kubenswrapper[4520]: I0130 07:02:04.776635 4520 generic.go:334] "Generic (PLEG): container finished" podID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerID="031bb27045d77e81a0ede0f6f9ccfebeb7a22a66da950d24f197e63f2ec65d97" exitCode=0
Jan 30 07:02:04 crc kubenswrapper[4520]: I0130 07:02:04.776695 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"be9a112d-54bd-4ecd-bd57-5649fb5ae79f","Type":"ContainerDied","Data":"031bb27045d77e81a0ede0f6f9ccfebeb7a22a66da950d24f197e63f2ec65d97"}
Jan 30 07:02:05 crc kubenswrapper[4520]: I0130 07:02:05.562343 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7hthh"
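
The "Killing container with a grace period" records (gracePeriod=30 above for the glance containers, gracePeriod=2 further down for the marketplace registry-server) show the CRI shutdown sequence: the kubelet asks CRI-O to deliver SIGTERM, waits up to the grace period, then escalates to SIGKILL. That also explains the exit codes in the PLEG "container finished" events: exitCode=143 is 128 + 15, a process that died on SIGTERM, while exitCode=0 is a clean shutdown within the window. The window comes from terminationGracePeriodSeconds in the pod spec (or a grace period passed with the delete request). A minimal sketch with the Go API types, using placeholder pod and image names rather than the real manifests:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// terminationGracePeriodSeconds is the SIGTERM-to-SIGKILL window.
	// 30s matches the gracePeriod=30 seen for the glance containers; the
	// container name and image below are placeholders.
	grace := int64(30)
	spec := corev1.PodSpec{
		TerminationGracePeriodSeconds: &grace,
		Containers: []corev1.Container{
			{Name: "glance-httpd", Image: "example.invalid/glance:latest"},
		},
	}
	fmt.Printf("grace period: %ds, containers: %d\n",
		*spec.TerminationGracePeriodSeconds, len(spec.Containers))
	// 143 = 128 + SIGTERM(15): the exit code reported when the container
	// is terminated by the signal rather than exiting on its own.
	fmt.Println("SIGTERM exit code:", 128+15)
}
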
Jan 30 07:02:05 crc kubenswrapper[4520]: I0130 07:02:05.608580 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7hthh"
Jan 30 07:02:05 crc kubenswrapper[4520]: I0130 07:02:05.957446 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-97977bbb9-v5xms"
Jan 30 07:02:06 crc kubenswrapper[4520]: I0130 07:02:06.018856 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8b6685bb8-85zvh"]
Jan 30 07:02:06 crc kubenswrapper[4520]: I0130 07:02:06.122871 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-fc44c7cd8-mhtpx"
Jan 30 07:02:06 crc kubenswrapper[4520]: I0130 07:02:06.171317 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-dc9bfd46d-rs8m5"]
Jan 30 07:02:06 crc kubenswrapper[4520]: I0130 07:02:06.404020 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7hthh"]
Jan 30 07:02:06 crc kubenswrapper[4520]: I0130 07:02:06.813712 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7hthh" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="registry-server" containerID="cri-o://b99c38bf6ffe9fb2362232ef28015a21fb9eefcaaf49a2018073a81502294137" gracePeriod=2
Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.601926 4520 scope.go:117] "RemoveContainer" containerID="1200d1a51eb59a07bd155e2cd066f1e8eaf9d811142a9383fa3598df70c08479"
Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.634132 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.654907 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8b6685bb8-85zvh"
Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.655043 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.684245 4520 scope.go:117] "RemoveContainer" containerID="1ecf0861fabebb22b8a12d75877898faf2d5ce39be06cbc05afae3cadd820a5e" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706668 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-combined-ca-bundle\") pod \"015102e7-4492-43ab-b32c-8938836fc162\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706706 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-scripts\") pod \"787adbf3-a537-453d-a7fc-efbbdec67245\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706767 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"787adbf3-a537-453d-a7fc-efbbdec67245\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706804 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data-custom\") pod \"2541934c-0c62-4b72-b405-bfc672fc5568\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706847 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-public-tls-certs\") pod \"787adbf3-a537-453d-a7fc-efbbdec67245\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706906 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data\") pod \"2541934c-0c62-4b72-b405-bfc672fc5568\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706927 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvr5n\" (UniqueName: \"kubernetes.io/projected/2541934c-0c62-4b72-b405-bfc672fc5568-kube-api-access-wvr5n\") pod \"2541934c-0c62-4b72-b405-bfc672fc5568\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706955 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-httpd-run\") pod \"787adbf3-a537-453d-a7fc-efbbdec67245\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.706987 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data\") pod \"015102e7-4492-43ab-b32c-8938836fc162\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.707005 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-logs\") pod \"787adbf3-a537-453d-a7fc-efbbdec67245\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.707032 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-config-data\") pod \"787adbf3-a537-453d-a7fc-efbbdec67245\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.707070 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz5b9\" (UniqueName: \"kubernetes.io/projected/787adbf3-a537-453d-a7fc-efbbdec67245-kube-api-access-sz5b9\") pod \"787adbf3-a537-453d-a7fc-efbbdec67245\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.707141 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data-custom\") pod \"015102e7-4492-43ab-b32c-8938836fc162\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.707202 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-combined-ca-bundle\") pod \"787adbf3-a537-453d-a7fc-efbbdec67245\" (UID: \"787adbf3-a537-453d-a7fc-efbbdec67245\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.707275 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mdvk\" (UniqueName: \"kubernetes.io/projected/015102e7-4492-43ab-b32c-8938836fc162-kube-api-access-6mdvk\") pod \"015102e7-4492-43ab-b32c-8938836fc162\" (UID: \"015102e7-4492-43ab-b32c-8938836fc162\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.707316 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-combined-ca-bundle\") pod \"2541934c-0c62-4b72-b405-bfc672fc5568\" (UID: \"2541934c-0c62-4b72-b405-bfc672fc5568\") " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.709027 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "787adbf3-a537-453d-a7fc-efbbdec67245" (UID: "787adbf3-a537-453d-a7fc-efbbdec67245"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.715364 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-logs" (OuterVolumeSpecName: "logs") pod "787adbf3-a537-453d-a7fc-efbbdec67245" (UID: "787adbf3-a537-453d-a7fc-efbbdec67245"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.728044 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "015102e7-4492-43ab-b32c-8938836fc162" (UID: "015102e7-4492-43ab-b32c-8938836fc162"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.755588 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-scripts" (OuterVolumeSpecName: "scripts") pod "787adbf3-a537-453d-a7fc-efbbdec67245" (UID: "787adbf3-a537-453d-a7fc-efbbdec67245"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.756303 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787adbf3-a537-453d-a7fc-efbbdec67245-kube-api-access-sz5b9" (OuterVolumeSpecName: "kube-api-access-sz5b9") pod "787adbf3-a537-453d-a7fc-efbbdec67245" (UID: "787adbf3-a537-453d-a7fc-efbbdec67245"). InnerVolumeSpecName "kube-api-access-sz5b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.756447 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2541934c-0c62-4b72-b405-bfc672fc5568" (UID: "2541934c-0c62-4b72-b405-bfc672fc5568"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.756964 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "787adbf3-a537-453d-a7fc-efbbdec67245" (UID: "787adbf3-a537-453d-a7fc-efbbdec67245"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.779702 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015102e7-4492-43ab-b32c-8938836fc162-kube-api-access-6mdvk" (OuterVolumeSpecName: "kube-api-access-6mdvk") pod "015102e7-4492-43ab-b32c-8938836fc162" (UID: "015102e7-4492-43ab-b32c-8938836fc162"). InnerVolumeSpecName "kube-api-access-6mdvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.779921 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2541934c-0c62-4b72-b405-bfc672fc5568-kube-api-access-wvr5n" (OuterVolumeSpecName: "kube-api-access-wvr5n") pod "2541934c-0c62-4b72-b405-bfc672fc5568" (UID: "2541934c-0c62-4b72-b405-bfc672fc5568"). InnerVolumeSpecName "kube-api-access-wvr5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.802110 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2541934c-0c62-4b72-b405-bfc672fc5568" (UID: "2541934c-0c62-4b72-b405-bfc672fc5568"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812677 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812712 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812736 4520 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812746 4520 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812756 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvr5n\" (UniqueName: \"kubernetes.io/projected/2541934c-0c62-4b72-b405-bfc672fc5568-kube-api-access-wvr5n\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812766 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812774 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/787adbf3-a537-453d-a7fc-efbbdec67245-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812782 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz5b9\" (UniqueName: \"kubernetes.io/projected/787adbf3-a537-453d-a7fc-efbbdec67245-kube-api-access-sz5b9\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812793 4520 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.812801 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mdvk\" (UniqueName: \"kubernetes.io/projected/015102e7-4492-43ab-b32c-8938836fc162-kube-api-access-6mdvk\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.833140 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "787adbf3-a537-453d-a7fc-efbbdec67245" (UID: "787adbf3-a537-453d-a7fc-efbbdec67245"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.912309 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "015102e7-4492-43ab-b32c-8938836fc162" (UID: "015102e7-4492-43ab-b32c-8938836fc162"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.924834 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data" (OuterVolumeSpecName: "config-data") pod "015102e7-4492-43ab-b32c-8938836fc162" (UID: "015102e7-4492-43ab-b32c-8938836fc162"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.926966 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"787adbf3-a537-453d-a7fc-efbbdec67245","Type":"ContainerDied","Data":"1abceda6299f4a5ecf186330b1fdbbcc1c2b0ab0b2f8cea7f1a417289ab3a72e"} Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.927080 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.932582 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.932605 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.932617 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015102e7-4492-43ab-b32c-8938836fc162-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.935730 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" event={"ID":"2541934c-0c62-4b72-b405-bfc672fc5568","Type":"ContainerDied","Data":"a700c772d114507a9f099ecb09acd8d40ce292c53899924f1e3a26a466f4ba7d"} Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.935795 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-dc9bfd46d-rs8m5" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.951010 4520 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.955258 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b6685bb8-85zvh" event={"ID":"015102e7-4492-43ab-b32c-8938836fc162","Type":"ContainerDied","Data":"0fd0388d8508d4cd8b4ae0ec9b1917f063cbfd70d9fa6ed70ed2ac660fbc0ed7"} Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.955569 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8b6685bb8-85zvh" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.965484 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-config-data" (OuterVolumeSpecName: "config-data") pod "787adbf3-a537-453d-a7fc-efbbdec67245" (UID: "787adbf3-a537-453d-a7fc-efbbdec67245"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.974771 4520 generic.go:334] "Generic (PLEG): container finished" podID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerID="b99c38bf6ffe9fb2362232ef28015a21fb9eefcaaf49a2018073a81502294137" exitCode=0 Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.974917 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hthh" event={"ID":"843b1d9d-26f2-42d5-b8ff-331b66efd5f8","Type":"ContainerDied","Data":"b99c38bf6ffe9fb2362232ef28015a21fb9eefcaaf49a2018073a81502294137"} Jan 30 07:02:07 crc kubenswrapper[4520]: I0130 07:02:07.990246 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "787adbf3-a537-453d-a7fc-efbbdec67245" (UID: "787adbf3-a537-453d-a7fc-efbbdec67245"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.048101 4520 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.048145 4520 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.048161 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/787adbf3-a537-453d-a7fc-efbbdec67245-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.053574 4520 scope.go:117] "RemoveContainer" containerID="a4ba56155730aa47005206621dbdbf22dc6eff2744b886b1ddf481ef64c91099" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.054590 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.083200 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8b6685bb8-85zvh"] Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.102744 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data" (OuterVolumeSpecName: "config-data") pod "2541934c-0c62-4b72-b405-bfc672fc5568" (UID: "2541934c-0c62-4b72-b405-bfc672fc5568"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.155678 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-catalog-content\") pod \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.156301 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hn48\" (UniqueName: \"kubernetes.io/projected/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-kube-api-access-2hn48\") pod \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.156452 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-utilities\") pod \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\" (UID: \"843b1d9d-26f2-42d5-b8ff-331b66efd5f8\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.158073 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8b6685bb8-85zvh"] Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.158494 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2541934c-0c62-4b72-b405-bfc672fc5568-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.160380 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-utilities" (OuterVolumeSpecName: "utilities") pod "843b1d9d-26f2-42d5-b8ff-331b66efd5f8" (UID: "843b1d9d-26f2-42d5-b8ff-331b66efd5f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.170691 4520 scope.go:117] "RemoveContainer" containerID="e8bb2877ea98fb6556ebc703ed33a000fb248bc107256b5ccb28d878fb9b762b" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.173590 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-kube-api-access-2hn48" (OuterVolumeSpecName: "kube-api-access-2hn48") pod "843b1d9d-26f2-42d5-b8ff-331b66efd5f8" (UID: "843b1d9d-26f2-42d5-b8ff-331b66efd5f8"). InnerVolumeSpecName "kube-api-access-2hn48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.282839 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hn48\" (UniqueName: \"kubernetes.io/projected/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-kube-api-access-2hn48\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.282894 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.295660 4520 scope.go:117] "RemoveContainer" containerID="ef72d32e988252b7696fb6bdb1d9060db9878a67f2e9e493a010bf5f9aca2e05" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.312718 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.317273 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-dc9bfd46d-rs8m5"] Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.345427 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-dc9bfd46d-rs8m5"] Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.362864 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "843b1d9d-26f2-42d5-b8ff-331b66efd5f8" (UID: "843b1d9d-26f2-42d5-b8ff-331b66efd5f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.366630 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.371992 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.376887 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.377326 4520 scope.go:117] "RemoveContainer" containerID="9eb896f1ea30dfa0588c510e5a206809aeaf9286c169bbabcce5066533d7ac90" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.384928 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data-custom\") pod \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.384972 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tng5s\" (UniqueName: \"kubernetes.io/projected/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-kube-api-access-tng5s\") pod \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.385048 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data\") pod \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.385272 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-combined-ca-bundle\") pod \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\" (UID: \"2c99ef8b-2ef2-4e57-996c-d74afbaa161e\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.390540 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843b1d9d-26f2-42d5-b8ff-331b66efd5f8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.396719 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c99ef8b-2ef2-4e57-996c-d74afbaa161e" (UID: 
"2c99ef8b-2ef2-4e57-996c-d74afbaa161e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.407121 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.408392 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="registry-server" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.408480 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="registry-server" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.408621 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerName="glance-httpd" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.408694 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerName="glance-httpd" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.408759 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.408819 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.408866 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015102e7-4492-43ab-b32c-8938836fc162" containerName="heat-api" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.408909 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="015102e7-4492-43ab-b32c-8938836fc162" containerName="heat-api" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.408965 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerName="glance-log" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.409014 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerName="glance-log" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.409063 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015102e7-4492-43ab-b32c-8938836fc162" containerName="heat-api" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.409108 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="015102e7-4492-43ab-b32c-8938836fc162" containerName="heat-api" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.409190 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="extract-content" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.409257 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="extract-content" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.409306 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.409352 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.409409 4520 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2541934c-0c62-4b72-b405-bfc672fc5568" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.409463 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.409552 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="extract-utilities" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.409605 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="extract-utilities" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.409675 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-log" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.409731 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-log" Jan 30 07:02:08 crc kubenswrapper[4520]: E0130 07:02:08.418704 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-httpd" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.418851 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-httpd" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.421782 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-log" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.421864 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerName="glance-log" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.421933 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" containerName="glance-httpd" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.421994 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" containerName="registry-server" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.422050 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" containerName="glance-httpd" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.422097 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.422147 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.422197 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" containerName="heat-cfnapi" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.422249 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="015102e7-4492-43ab-b32c-8938836fc162" containerName="heat-api" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.422747 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="015102e7-4492-43ab-b32c-8938836fc162" containerName="heat-api" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.414155 4520 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-kube-api-access-tng5s" (OuterVolumeSpecName: "kube-api-access-tng5s") pod "2c99ef8b-2ef2-4e57-996c-d74afbaa161e" (UID: "2c99ef8b-2ef2-4e57-996c-d74afbaa161e"). InnerVolumeSpecName "kube-api-access-tng5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.423698 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.428563 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.428887 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.437823 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.465007 4520 scope.go:117] "RemoveContainer" containerID="1d6d04a10943b51a0cebea325ad521b8506c9c8595442da6368b9ebf9a52f8d1" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.469602 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:02:08 crc kubenswrapper[4520]: W0130 07:02:08.480030 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd7bb964_36cf_4819_9468_95da06ce8e86.slice/crio-64d3bc2c53bb85f622233e6ed311cd0d4f060042205fe14393c384e51a261e5b WatchSource:0}: Error finding container 64d3bc2c53bb85f622233e6ed311cd0d4f060042205fe14393c384e51a261e5b: Status 404 returned error can't find the container with id 64d3bc2c53bb85f622233e6ed311cd0d4f060042205fe14393c384e51a261e5b Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.482632 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c99ef8b-2ef2-4e57-996c-d74afbaa161e" (UID: "2c99ef8b-2ef2-4e57-996c-d74afbaa161e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491006 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491196 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-httpd-run\") pod \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491225 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-scripts\") pod \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491256 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-internal-tls-certs\") pod \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491274 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-combined-ca-bundle\") pod \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491351 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-logs\") pod \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491491 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-config-data\") pod \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491543 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdt78\" (UniqueName: \"kubernetes.io/projected/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-kube-api-access-mdt78\") pod \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\" (UID: \"be9a112d-54bd-4ecd-bd57-5649fb5ae79f\") " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491897 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.491989 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/152b98ac-3a20-475d-9e16-b2894c2c8b36-logs\") pod \"glance-default-external-api-0\" (UID: 
\"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492034 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-scripts\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492285 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w54ln\" (UniqueName: \"kubernetes.io/projected/152b98ac-3a20-475d-9e16-b2894c2c8b36-kube-api-access-w54ln\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492316 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492338 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492386 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/152b98ac-3a20-475d-9e16-b2894c2c8b36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492410 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-config-data\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492452 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492463 4520 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.492472 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tng5s\" (UniqueName: \"kubernetes.io/projected/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-kube-api-access-tng5s\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.496018 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-logs" 
(OuterVolumeSpecName: "logs") pod "be9a112d-54bd-4ecd-bd57-5649fb5ae79f" (UID: "be9a112d-54bd-4ecd-bd57-5649fb5ae79f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.498047 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "be9a112d-54bd-4ecd-bd57-5649fb5ae79f" (UID: "be9a112d-54bd-4ecd-bd57-5649fb5ae79f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.502940 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-scripts" (OuterVolumeSpecName: "scripts") pod "be9a112d-54bd-4ecd-bd57-5649fb5ae79f" (UID: "be9a112d-54bd-4ecd-bd57-5649fb5ae79f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.509750 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-kube-api-access-mdt78" (OuterVolumeSpecName: "kube-api-access-mdt78") pod "be9a112d-54bd-4ecd-bd57-5649fb5ae79f" (UID: "be9a112d-54bd-4ecd-bd57-5649fb5ae79f"). InnerVolumeSpecName "kube-api-access-mdt78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.509768 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "be9a112d-54bd-4ecd-bd57-5649fb5ae79f" (UID: "be9a112d-54bd-4ecd-bd57-5649fb5ae79f"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.524417 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be9a112d-54bd-4ecd-bd57-5649fb5ae79f" (UID: "be9a112d-54bd-4ecd-bd57-5649fb5ae79f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.561118 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data" (OuterVolumeSpecName: "config-data") pod "2c99ef8b-2ef2-4e57-996c-d74afbaa161e" (UID: "2c99ef8b-2ef2-4e57-996c-d74afbaa161e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.585867 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "be9a112d-54bd-4ecd-bd57-5649fb5ae79f" (UID: "be9a112d-54bd-4ecd-bd57-5649fb5ae79f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.600916 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w54ln\" (UniqueName: \"kubernetes.io/projected/152b98ac-3a20-475d-9e16-b2894c2c8b36-kube-api-access-w54ln\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.600957 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.600989 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.601039 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/152b98ac-3a20-475d-9e16-b2894c2c8b36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.601065 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-config-data\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.601085 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.601130 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/152b98ac-3a20-475d-9e16-b2894c2c8b36-logs\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.601156 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-scripts\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.601703 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/152b98ac-3a20-475d-9e16-b2894c2c8b36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 
07:02:08.602307 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/152b98ac-3a20-475d-9e16-b2894c2c8b36-logs\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.602690 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.605246 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.606104 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-config-data" (OuterVolumeSpecName: "config-data") pod "be9a112d-54bd-4ecd-bd57-5649fb5ae79f" (UID: "be9a112d-54bd-4ecd-bd57-5649fb5ae79f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.606383 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.607039 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-config-data\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.607995 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.608014 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.608024 4520 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.608036 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.608046 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.608058 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c99ef8b-2ef2-4e57-996c-d74afbaa161e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.608067 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdt78\" (UniqueName: \"kubernetes.io/projected/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-kube-api-access-mdt78\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.608093 4520 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.622052 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/152b98ac-3a20-475d-9e16-b2894c2c8b36-scripts\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.635793 4520 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.656914 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w54ln\" (UniqueName: \"kubernetes.io/projected/152b98ac-3a20-475d-9e16-b2894c2c8b36-kube-api-access-w54ln\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.667144 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"152b98ac-3a20-475d-9e16-b2894c2c8b36\") " pod="openstack/glance-default-external-api-0" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.699710 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015102e7-4492-43ab-b32c-8938836fc162" path="/var/lib/kubelet/pods/015102e7-4492-43ab-b32c-8938836fc162/volumes" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.700327 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2541934c-0c62-4b72-b405-bfc672fc5568" path="/var/lib/kubelet/pods/2541934c-0c62-4b72-b405-bfc672fc5568/volumes" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.700953 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787adbf3-a537-453d-a7fc-efbbdec67245" path="/var/lib/kubelet/pods/787adbf3-a537-453d-a7fc-efbbdec67245/volumes" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.711070 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be9a112d-54bd-4ecd-bd57-5649fb5ae79f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 07:02:08.711091 4520 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:08 crc kubenswrapper[4520]: I0130 
07:02:08.749968 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.013879 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" event={"ID":"2c99ef8b-2ef2-4e57-996c-d74afbaa161e","Type":"ContainerDied","Data":"c9fb7ee3fa91ea11b57d39df2f3d728a96994401b7f2537408860e9070bb7589"} Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.014206 4520 scope.go:117] "RemoveContainer" containerID="512ec0cfd4c2aa6ff9b71fad6954f1bf869d19bc298f81792cad52578cc47ac2" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.014335 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c4c8c7bb-pfwmd" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.031730 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cvbx4" event={"ID":"12c4f381-2282-4c3c-8735-8862b07e65dc","Type":"ContainerStarted","Data":"f536a2a6852db4fc7aa42b4b86986e5ac7f2c01a1c28b5abf5bebf08ece6bc32"} Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.041366 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hthh" event={"ID":"843b1d9d-26f2-42d5-b8ff-331b66efd5f8","Type":"ContainerDied","Data":"25ded74a5d949d5cfe5b4f60b555c7d97289d4d9d42de847af93046f31593e74"} Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.041459 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7hthh" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.046532 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerStarted","Data":"64d3bc2c53bb85f622233e6ed311cd0d4f060042205fe14393c384e51a261e5b"} Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.068949 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c4c8c7bb-pfwmd"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.074400 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"be9a112d-54bd-4ecd-bd57-5649fb5ae79f","Type":"ContainerDied","Data":"de4c12bcddde17643951dca029a3f16812a28d56a8fd6bb5ba7452f9e6032fc9"} Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.074502 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.078305 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7c4c8c7bb-pfwmd"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.078804 4520 scope.go:117] "RemoveContainer" containerID="b99c38bf6ffe9fb2362232ef28015a21fb9eefcaaf49a2018073a81502294137" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.080081 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cvbx4" podStartSLOduration=4.127645367 podStartE2EDuration="22.080062596s" podCreationTimestamp="2026-01-30 07:01:47 +0000 UTC" firstStartedPulling="2026-01-30 07:01:49.699625674 +0000 UTC m=+1023.327977856" lastFinishedPulling="2026-01-30 07:02:07.652042903 +0000 UTC m=+1041.280395085" observedRunningTime="2026-01-30 07:02:09.047559578 +0000 UTC m=+1042.675911759" watchObservedRunningTime="2026-01-30 07:02:09.080062596 +0000 UTC m=+1042.708414778" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.093428 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrqhk" event={"ID":"61a58f46-d0e7-4ca3-b01d-52758e84d242","Type":"ContainerStarted","Data":"f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029"} Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.104170 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7hthh"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.117004 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7hthh"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.117046 4520 scope.go:117] "RemoveContainer" containerID="7a924c89e2620139928431b96049e7da9bfa56fc19180750c7791ac6c14e31e9" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.124970 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.128922 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.142624 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.144299 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.146285 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.148432 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.148710 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.189659 4520 scope.go:117] "RemoveContainer" containerID="021bdb919594cc9f63bef45c8d76edf9fda2c438fec7d13a8e7a279ba692f73d" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.224837 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.224987 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.225050 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.225080 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56675890-6143-418e-a190-6ec302798da2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.225160 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.225184 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56675890-6143-418e-a190-6ec302798da2-logs\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.225283 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjqlk\" (UniqueName: \"kubernetes.io/projected/56675890-6143-418e-a190-6ec302798da2-kube-api-access-xjqlk\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") 
" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.225374 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.257178 4520 scope.go:117] "RemoveContainer" containerID="031bb27045d77e81a0ede0f6f9ccfebeb7a22a66da950d24f197e63f2ec65d97" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.302040 4520 scope.go:117] "RemoveContainer" containerID="37c52a65cacf4ff4c8e717a6432b07b0aa845022d36485b2f1128811dd9e3c3a" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.318085 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.333956 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.334021 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56675890-6143-418e-a190-6ec302798da2-logs\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.334141 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjqlk\" (UniqueName: \"kubernetes.io/projected/56675890-6143-418e-a190-6ec302798da2-kube-api-access-xjqlk\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.334265 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.334345 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.334394 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.334459 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.334497 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56675890-6143-418e-a190-6ec302798da2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.335965 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.336564 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56675890-6143-418e-a190-6ec302798da2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.342157 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56675890-6143-418e-a190-6ec302798da2-logs\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.352572 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.353999 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.357249 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.358212 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56675890-6143-418e-a190-6ec302798da2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.409261 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjqlk\" (UniqueName: \"kubernetes.io/projected/56675890-6143-418e-a190-6ec302798da2-kube-api-access-xjqlk\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.433811 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"56675890-6143-418e-a190-6ec302798da2\") " pod="openstack/glance-default-internal-api-0" Jan 30 07:02:09 crc kubenswrapper[4520]: I0130 07:02:09.502200 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:10 crc kubenswrapper[4520]: I0130 07:02:10.110413 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"152b98ac-3a20-475d-9e16-b2894c2c8b36","Type":"ContainerStarted","Data":"22e28cb91f1d84515ea9a8a8555c3cc0b7acf41c3c4fb62497c3b517ae6c508d"} Jan 30 07:02:10 crc kubenswrapper[4520]: I0130 07:02:10.112921 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerStarted","Data":"ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f"} Jan 30 07:02:10 crc kubenswrapper[4520]: I0130 07:02:10.321556 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 07:02:10 crc kubenswrapper[4520]: I0130 07:02:10.698203 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c99ef8b-2ef2-4e57-996c-d74afbaa161e" path="/var/lib/kubelet/pods/2c99ef8b-2ef2-4e57-996c-d74afbaa161e/volumes" Jan 30 07:02:10 crc kubenswrapper[4520]: I0130 07:02:10.698978 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843b1d9d-26f2-42d5-b8ff-331b66efd5f8" path="/var/lib/kubelet/pods/843b1d9d-26f2-42d5-b8ff-331b66efd5f8/volumes" Jan 30 07:02:10 crc kubenswrapper[4520]: I0130 07:02:10.699688 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9a112d-54bd-4ecd-bd57-5649fb5ae79f" path="/var/lib/kubelet/pods/be9a112d-54bd-4ecd-bd57-5649fb5ae79f/volumes" Jan 30 07:02:11 crc kubenswrapper[4520]: I0130 07:02:11.084473 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5ccfff75db-kf7nx" Jan 30 07:02:11 crc kubenswrapper[4520]: I0130 07:02:11.150181 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6b7486bc6d-lhplk"] Jan 30 07:02:11 crc kubenswrapper[4520]: I0130 07:02:11.152268 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6b7486bc6d-lhplk" podUID="a58bb950-bc15-4ca5-9e01-49c1e92fdf24" containerName="heat-engine" containerID="cri-o://26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" gracePeriod=60 Jan 30 07:02:11 crc kubenswrapper[4520]: I0130 07:02:11.181564 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56675890-6143-418e-a190-6ec302798da2","Type":"ContainerStarted","Data":"39af07e4115352718f61956085a5d9f0b349d4ca26d3e6f5c67be9749c8b90aa"} Jan 30 07:02:11 crc kubenswrapper[4520]: I0130 07:02:11.183599 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"152b98ac-3a20-475d-9e16-b2894c2c8b36","Type":"ContainerStarted","Data":"fba2117246cc40f6576a012020861baf99c4a6f1d73b719d876a4903701cd6da"} Jan 30 07:02:12 crc kubenswrapper[4520]: I0130 07:02:12.249293 4520 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"152b98ac-3a20-475d-9e16-b2894c2c8b36","Type":"ContainerStarted","Data":"7f5d247d1a0032deea2e01aaa50867427617bbe6f8af3518567340286bce4ead"} Jan 30 07:02:12 crc kubenswrapper[4520]: I0130 07:02:12.299885 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerStarted","Data":"679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45"} Jan 30 07:02:12 crc kubenswrapper[4520]: I0130 07:02:12.301570 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.301547656 podStartE2EDuration="4.301547656s" podCreationTimestamp="2026-01-30 07:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:02:12.282128337 +0000 UTC m=+1045.910480518" watchObservedRunningTime="2026-01-30 07:02:12.301547656 +0000 UTC m=+1045.929899837" Jan 30 07:02:12 crc kubenswrapper[4520]: I0130 07:02:12.325697 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56675890-6143-418e-a190-6ec302798da2","Type":"ContainerStarted","Data":"639626e835dc3e4fc5fb93b6d959696ed3d697bb3b9a8bb193c89c6a12a416bf"} Jan 30 07:02:12 crc kubenswrapper[4520]: I0130 07:02:12.341936 4520 generic.go:334] "Generic (PLEG): container finished" podID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerID="f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029" exitCode=0 Jan 30 07:02:12 crc kubenswrapper[4520]: I0130 07:02:12.341986 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrqhk" event={"ID":"61a58f46-d0e7-4ca3-b01d-52758e84d242","Type":"ContainerDied","Data":"f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029"} Jan 30 07:02:13 crc kubenswrapper[4520]: I0130 07:02:13.354892 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56675890-6143-418e-a190-6ec302798da2","Type":"ContainerStarted","Data":"5a4f93ad006989d98a106d2aac05101fc37b5c872f86e9d7b6301df89d09a99e"} Jan 30 07:02:13 crc kubenswrapper[4520]: I0130 07:02:13.357823 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrqhk" event={"ID":"61a58f46-d0e7-4ca3-b01d-52758e84d242","Type":"ContainerStarted","Data":"ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a"} Jan 30 07:02:13 crc kubenswrapper[4520]: I0130 07:02:13.359773 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerStarted","Data":"bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc"} Jan 30 07:02:13 crc kubenswrapper[4520]: I0130 07:02:13.389293 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.389274907 podStartE2EDuration="4.389274907s" podCreationTimestamp="2026-01-30 07:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:02:13.386987075 +0000 UTC m=+1047.015339256" watchObservedRunningTime="2026-01-30 07:02:13.389274907 +0000 UTC m=+1047.017627088" Jan 30 07:02:13 crc kubenswrapper[4520]: I0130 07:02:13.410595 
4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lrqhk" podStartSLOduration=10.201996356 podStartE2EDuration="16.410582697s" podCreationTimestamp="2026-01-30 07:01:57 +0000 UTC" firstStartedPulling="2026-01-30 07:02:06.661579025 +0000 UTC m=+1040.289931206" lastFinishedPulling="2026-01-30 07:02:12.870165366 +0000 UTC m=+1046.498517547" observedRunningTime="2026-01-30 07:02:13.407289735 +0000 UTC m=+1047.035641907" watchObservedRunningTime="2026-01-30 07:02:13.410582697 +0000 UTC m=+1047.038934878" Jan 30 07:02:14 crc kubenswrapper[4520]: E0130 07:02:14.127968 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 07:02:14 crc kubenswrapper[4520]: E0130 07:02:14.131314 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 07:02:14 crc kubenswrapper[4520]: E0130 07:02:14.133377 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 07:02:14 crc kubenswrapper[4520]: E0130 07:02:14.133419 4520 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6b7486bc6d-lhplk" podUID="a58bb950-bc15-4ca5-9e01-49c1e92fdf24" containerName="heat-engine" Jan 30 07:02:14 crc kubenswrapper[4520]: I0130 07:02:14.370549 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerStarted","Data":"9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed"} Jan 30 07:02:14 crc kubenswrapper[4520]: I0130 07:02:14.370815 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="ceilometer-central-agent" containerID="cri-o://ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f" gracePeriod=30 Jan 30 07:02:14 crc kubenswrapper[4520]: I0130 07:02:14.371183 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="proxy-httpd" containerID="cri-o://9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed" gracePeriod=30 Jan 30 07:02:14 crc kubenswrapper[4520]: I0130 07:02:14.371198 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="sg-core" containerID="cri-o://bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc" gracePeriod=30 Jan 30 07:02:14 crc kubenswrapper[4520]: I0130 07:02:14.371208 4520 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="ceilometer-notification-agent" containerID="cri-o://679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45" gracePeriod=30 Jan 30 07:02:14 crc kubenswrapper[4520]: I0130 07:02:14.393293 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.852099139 podStartE2EDuration="15.393280104s" podCreationTimestamp="2026-01-30 07:01:59 +0000 UTC" firstStartedPulling="2026-01-30 07:02:08.494775236 +0000 UTC m=+1042.123127417" lastFinishedPulling="2026-01-30 07:02:14.035956202 +0000 UTC m=+1047.664308382" observedRunningTime="2026-01-30 07:02:14.390194975 +0000 UTC m=+1048.018547156" watchObservedRunningTime="2026-01-30 07:02:14.393280104 +0000 UTC m=+1048.021632286" Jan 30 07:02:15 crc kubenswrapper[4520]: I0130 07:02:15.386680 4520 generic.go:334] "Generic (PLEG): container finished" podID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerID="bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc" exitCode=2 Jan 30 07:02:15 crc kubenswrapper[4520]: I0130 07:02:15.387079 4520 generic.go:334] "Generic (PLEG): container finished" podID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerID="679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45" exitCode=0 Jan 30 07:02:15 crc kubenswrapper[4520]: I0130 07:02:15.386766 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerDied","Data":"bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc"} Jan 30 07:02:15 crc kubenswrapper[4520]: I0130 07:02:15.387136 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerDied","Data":"679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45"} Jan 30 07:02:18 crc kubenswrapper[4520]: I0130 07:02:18.096362 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:02:18 crc kubenswrapper[4520]: I0130 07:02:18.097065 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:02:18 crc kubenswrapper[4520]: I0130 07:02:18.751340 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 07:02:18 crc kubenswrapper[4520]: I0130 07:02:18.751761 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 07:02:18 crc kubenswrapper[4520]: I0130 07:02:18.787105 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 07:02:18 crc kubenswrapper[4520]: I0130 07:02:18.789970 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 07:02:19 crc kubenswrapper[4520]: I0130 07:02:19.136694 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lrqhk" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="registry-server" probeResult="failure" output=< Jan 30 07:02:19 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:02:19 crc kubenswrapper[4520]: > Jan 30 07:02:19 crc kubenswrapper[4520]: I0130 
07:02:19.425901 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 07:02:19 crc kubenswrapper[4520]: I0130 07:02:19.425967 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 07:02:19 crc kubenswrapper[4520]: I0130 07:02:19.503124 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:19 crc kubenswrapper[4520]: I0130 07:02:19.503489 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:19 crc kubenswrapper[4520]: I0130 07:02:19.540787 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:19 crc kubenswrapper[4520]: I0130 07:02:19.556855 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:20 crc kubenswrapper[4520]: I0130 07:02:20.439606 4520 generic.go:334] "Generic (PLEG): container finished" podID="12c4f381-2282-4c3c-8735-8862b07e65dc" containerID="f536a2a6852db4fc7aa42b4b86986e5ac7f2c01a1c28b5abf5bebf08ece6bc32" exitCode=0 Jan 30 07:02:20 crc kubenswrapper[4520]: I0130 07:02:20.441418 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cvbx4" event={"ID":"12c4f381-2282-4c3c-8735-8862b07e65dc","Type":"ContainerDied","Data":"f536a2a6852db4fc7aa42b4b86986e5ac7f2c01a1c28b5abf5bebf08ece6bc32"} Jan 30 07:02:20 crc kubenswrapper[4520]: I0130 07:02:20.441511 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:20 crc kubenswrapper[4520]: I0130 07:02:20.442030 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:21 crc kubenswrapper[4520]: I0130 07:02:21.814342 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cvbx4" Jan 30 07:02:21 crc kubenswrapper[4520]: I0130 07:02:21.970841 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-combined-ca-bundle\") pod \"12c4f381-2282-4c3c-8735-8862b07e65dc\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " Jan 30 07:02:21 crc kubenswrapper[4520]: I0130 07:02:21.971007 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsf4p\" (UniqueName: \"kubernetes.io/projected/12c4f381-2282-4c3c-8735-8862b07e65dc-kube-api-access-lsf4p\") pod \"12c4f381-2282-4c3c-8735-8862b07e65dc\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " Jan 30 07:02:21 crc kubenswrapper[4520]: I0130 07:02:21.971047 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-config-data\") pod \"12c4f381-2282-4c3c-8735-8862b07e65dc\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " Jan 30 07:02:21 crc kubenswrapper[4520]: I0130 07:02:21.971077 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-scripts\") pod \"12c4f381-2282-4c3c-8735-8862b07e65dc\" (UID: \"12c4f381-2282-4c3c-8735-8862b07e65dc\") " Jan 30 07:02:21 crc kubenswrapper[4520]: I0130 07:02:21.981427 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c4f381-2282-4c3c-8735-8862b07e65dc-kube-api-access-lsf4p" (OuterVolumeSpecName: "kube-api-access-lsf4p") pod "12c4f381-2282-4c3c-8735-8862b07e65dc" (UID: "12c4f381-2282-4c3c-8735-8862b07e65dc"). InnerVolumeSpecName "kube-api-access-lsf4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.009223 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12c4f381-2282-4c3c-8735-8862b07e65dc" (UID: "12c4f381-2282-4c3c-8735-8862b07e65dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.019413 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-scripts" (OuterVolumeSpecName: "scripts") pod "12c4f381-2282-4c3c-8735-8862b07e65dc" (UID: "12c4f381-2282-4c3c-8735-8862b07e65dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.019591 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-config-data" (OuterVolumeSpecName: "config-data") pod "12c4f381-2282-4c3c-8735-8862b07e65dc" (UID: "12c4f381-2282-4c3c-8735-8862b07e65dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.073510 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.073843 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsf4p\" (UniqueName: \"kubernetes.io/projected/12c4f381-2282-4c3c-8735-8862b07e65dc-kube-api-access-lsf4p\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.073933 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.074004 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c4f381-2282-4c3c-8735-8862b07e65dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.459687 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cvbx4" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.459696 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cvbx4" event={"ID":"12c4f381-2282-4c3c-8735-8862b07e65dc","Type":"ContainerDied","Data":"b735566a48a7b707de0ff1b0d7d679b4fdefa9a99920d9f9f1de3665fc38436c"} Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.460493 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b735566a48a7b707de0ff1b0d7d679b4fdefa9a99920d9f9f1de3665fc38436c" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.464294 4520 generic.go:334] "Generic (PLEG): container finished" podID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerID="ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f" exitCode=0 Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.464441 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.464530 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.465645 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerDied","Data":"ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f"} Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.574557 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:22 crc kubenswrapper[4520]: E0130 07:02:22.575419 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c4f381-2282-4c3c-8735-8862b07e65dc" containerName="nova-cell0-conductor-db-sync" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.575791 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c4f381-2282-4c3c-8735-8862b07e65dc" containerName="nova-cell0-conductor-db-sync" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.576061 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c4f381-2282-4c3c-8735-8862b07e65dc" containerName="nova-cell0-conductor-db-sync" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.576899 4520 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.591617 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.591988 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.592096 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dtmtd" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.696017 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.696332 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.696453 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f57b2\" (UniqueName: \"kubernetes.io/projected/f618127c-58ad-486c-8301-87a0f1621727-kube-api-access-f57b2\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.799036 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.799406 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.799668 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f57b2\" (UniqueName: \"kubernetes.io/projected/f618127c-58ad-486c-8301-87a0f1621727-kube-api-access-f57b2\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.811688 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.812362 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.838335 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f57b2\" (UniqueName: \"kubernetes.io/projected/f618127c-58ad-486c-8301-87a0f1621727-kube-api-access-f57b2\") pod \"nova-cell0-conductor-0\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:22 crc kubenswrapper[4520]: I0130 07:02:22.911959 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:23 crc kubenswrapper[4520]: I0130 07:02:23.375340 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 07:02:23 crc kubenswrapper[4520]: I0130 07:02:23.375899 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 07:02:23 crc kubenswrapper[4520]: I0130 07:02:23.419207 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 07:02:23 crc kubenswrapper[4520]: I0130 07:02:23.432871 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:23 crc kubenswrapper[4520]: I0130 07:02:23.480654 4520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 07:02:23 crc kubenswrapper[4520]: I0130 07:02:23.510261 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:23 crc kubenswrapper[4520]: I0130 07:02:23.885053 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 07:02:24 crc kubenswrapper[4520]: E0130 07:02:24.122351 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 07:02:24 crc kubenswrapper[4520]: E0130 07:02:24.139041 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 07:02:24 crc kubenswrapper[4520]: E0130 07:02:24.149582 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 07:02:24 crc kubenswrapper[4520]: E0130 07:02:24.149629 4520 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6b7486bc6d-lhplk" podUID="a58bb950-bc15-4ca5-9e01-49c1e92fdf24" containerName="heat-engine" Jan 30 07:02:24 crc kubenswrapper[4520]: I0130 07:02:24.490398 4520 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f618127c-58ad-486c-8301-87a0f1621727","Type":"ContainerStarted","Data":"b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82"} Jan 30 07:02:24 crc kubenswrapper[4520]: I0130 07:02:24.490474 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f618127c-58ad-486c-8301-87a0f1621727","Type":"ContainerStarted","Data":"251883da6571c6cf1f349dad354d12d1a5ad3d420cb761137bba70e545d0c66b"} Jan 30 07:02:24 crc kubenswrapper[4520]: I0130 07:02:24.490701 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:24 crc kubenswrapper[4520]: I0130 07:02:24.507957 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.50794541 podStartE2EDuration="2.50794541s" podCreationTimestamp="2026-01-30 07:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:02:24.505923929 +0000 UTC m=+1058.134276110" watchObservedRunningTime="2026-01-30 07:02:24.50794541 +0000 UTC m=+1058.136297590" Jan 30 07:02:26 crc kubenswrapper[4520]: I0130 07:02:26.092537 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:26 crc kubenswrapper[4520]: I0130 07:02:26.512079 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" gracePeriod=30 Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.039039 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.102675 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data-custom\") pod \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.102738 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data\") pod \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.102790 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-combined-ca-bundle\") pod \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.103000 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9nfl\" (UniqueName: \"kubernetes.io/projected/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-kube-api-access-n9nfl\") pod \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\" (UID: \"a58bb950-bc15-4ca5-9e01-49c1e92fdf24\") " Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.110845 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a58bb950-bc15-4ca5-9e01-49c1e92fdf24" (UID: "a58bb950-bc15-4ca5-9e01-49c1e92fdf24"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.124502 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-kube-api-access-n9nfl" (OuterVolumeSpecName: "kube-api-access-n9nfl") pod "a58bb950-bc15-4ca5-9e01-49c1e92fdf24" (UID: "a58bb950-bc15-4ca5-9e01-49c1e92fdf24"). InnerVolumeSpecName "kube-api-access-n9nfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.135702 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a58bb950-bc15-4ca5-9e01-49c1e92fdf24" (UID: "a58bb950-bc15-4ca5-9e01-49c1e92fdf24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.171057 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data" (OuterVolumeSpecName: "config-data") pod "a58bb950-bc15-4ca5-9e01-49c1e92fdf24" (UID: "a58bb950-bc15-4ca5-9e01-49c1e92fdf24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.209021 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9nfl\" (UniqueName: \"kubernetes.io/projected/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-kube-api-access-n9nfl\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.209328 4520 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.209466 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.209606 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58bb950-bc15-4ca5-9e01-49c1e92fdf24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.523411 4520 generic.go:334] "Generic (PLEG): container finished" podID="a58bb950-bc15-4ca5-9e01-49c1e92fdf24" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" exitCode=0 Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.523469 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6b7486bc6d-lhplk" event={"ID":"a58bb950-bc15-4ca5-9e01-49c1e92fdf24","Type":"ContainerDied","Data":"26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb"} Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.523506 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6b7486bc6d-lhplk" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.523558 4520 scope.go:117] "RemoveContainer" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.523541 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6b7486bc6d-lhplk" event={"ID":"a58bb950-bc15-4ca5-9e01-49c1e92fdf24","Type":"ContainerDied","Data":"8e974300619cb153bc74e091a1d969f6eda605f3feb8e5115b469d944a125c0b"} Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.545374 4520 scope.go:117] "RemoveContainer" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" Jan 30 07:02:27 crc kubenswrapper[4520]: E0130 07:02:27.545821 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb\": container with ID starting with 26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb not found: ID does not exist" containerID="26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.545854 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb"} err="failed to get container status \"26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb\": rpc error: code = NotFound desc = could not find container \"26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb\": container with ID starting with 26bd332855247aba63d2b87dfac793ecd4ff5bfa351dcd90bc794e8505cbc0fb not found: ID does not exist" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.557455 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6b7486bc6d-lhplk"] Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.564799 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6b7486bc6d-lhplk"] Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.793684 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.793776 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.793852 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:02:27 crc kubenswrapper[4520]: I0130 07:02:27.795305 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00188edbc7a901128a316b70d44312dd0aa78297ee86dd9a3630c6ec14392173"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:02:27 crc kubenswrapper[4520]: 
I0130 07:02:27.795383 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://00188edbc7a901128a316b70d44312dd0aa78297ee86dd9a3630c6ec14392173" gracePeriod=600 Jan 30 07:02:28 crc kubenswrapper[4520]: I0130 07:02:28.537584 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="00188edbc7a901128a316b70d44312dd0aa78297ee86dd9a3630c6ec14392173" exitCode=0 Jan 30 07:02:28 crc kubenswrapper[4520]: I0130 07:02:28.537673 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"00188edbc7a901128a316b70d44312dd0aa78297ee86dd9a3630c6ec14392173"} Jan 30 07:02:28 crc kubenswrapper[4520]: I0130 07:02:28.538064 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"3900247724d53d5b578bd24f8556d2a19d19cf1623714d7b59ae28dee17ff16f"} Jan 30 07:02:28 crc kubenswrapper[4520]: I0130 07:02:28.538095 4520 scope.go:117] "RemoveContainer" containerID="23b7c2584fae4db0c5cd58feba27cd2cddcee2416ca541fef55d331d3df60688" Jan 30 07:02:28 crc kubenswrapper[4520]: I0130 07:02:28.695563 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58bb950-bc15-4ca5-9e01-49c1e92fdf24" path="/var/lib/kubelet/pods/a58bb950-bc15-4ca5-9e01-49c1e92fdf24/volumes" Jan 30 07:02:29 crc kubenswrapper[4520]: I0130 07:02:29.138207 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lrqhk" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="registry-server" probeResult="failure" output=< Jan 30 07:02:29 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:02:29 crc kubenswrapper[4520]: > Jan 30 07:02:30 crc kubenswrapper[4520]: I0130 07:02:30.054941 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 07:02:30 crc kubenswrapper[4520]: I0130 07:02:30.063998 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 07:02:32 crc kubenswrapper[4520]: E0130 07:02:32.915298 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:32 crc kubenswrapper[4520]: E0130 07:02:32.919225 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:32 crc kubenswrapper[4520]: E0130 07:02:32.921028 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:32 crc kubenswrapper[4520]: E0130 07:02:32.921079 4520 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" Jan 30 07:02:37 crc kubenswrapper[4520]: E0130 07:02:37.914415 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:37 crc kubenswrapper[4520]: E0130 07:02:37.917206 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:37 crc kubenswrapper[4520]: E0130 07:02:37.918381 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:37 crc kubenswrapper[4520]: E0130 07:02:37.918485 4520 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" Jan 30 07:02:38 crc kubenswrapper[4520]: I0130 07:02:38.142272 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:02:38 crc kubenswrapper[4520]: I0130 07:02:38.202871 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:02:38 crc kubenswrapper[4520]: I0130 07:02:38.393378 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrqhk"] Jan 30 07:02:39 crc kubenswrapper[4520]: I0130 07:02:39.662753 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lrqhk" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="registry-server" containerID="cri-o://ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a" gracePeriod=2 Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.061876 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.118673 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-catalog-content\") pod \"61a58f46-d0e7-4ca3-b01d-52758e84d242\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.118782 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-utilities\") pod \"61a58f46-d0e7-4ca3-b01d-52758e84d242\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.118872 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szvhl\" (UniqueName: \"kubernetes.io/projected/61a58f46-d0e7-4ca3-b01d-52758e84d242-kube-api-access-szvhl\") pod \"61a58f46-d0e7-4ca3-b01d-52758e84d242\" (UID: \"61a58f46-d0e7-4ca3-b01d-52758e84d242\") " Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.120669 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-utilities" (OuterVolumeSpecName: "utilities") pod "61a58f46-d0e7-4ca3-b01d-52758e84d242" (UID: "61a58f46-d0e7-4ca3-b01d-52758e84d242"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.127123 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a58f46-d0e7-4ca3-b01d-52758e84d242-kube-api-access-szvhl" (OuterVolumeSpecName: "kube-api-access-szvhl") pod "61a58f46-d0e7-4ca3-b01d-52758e84d242" (UID: "61a58f46-d0e7-4ca3-b01d-52758e84d242"). InnerVolumeSpecName "kube-api-access-szvhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.221711 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.221846 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szvhl\" (UniqueName: \"kubernetes.io/projected/61a58f46-d0e7-4ca3-b01d-52758e84d242-kube-api-access-szvhl\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.231123 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61a58f46-d0e7-4ca3-b01d-52758e84d242" (UID: "61a58f46-d0e7-4ca3-b01d-52758e84d242"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.324937 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a58f46-d0e7-4ca3-b01d-52758e84d242-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.699786 4520 generic.go:334] "Generic (PLEG): container finished" podID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerID="ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a" exitCode=0 Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.700113 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrqhk" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.705887 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrqhk" event={"ID":"61a58f46-d0e7-4ca3-b01d-52758e84d242","Type":"ContainerDied","Data":"ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a"} Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.705951 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrqhk" event={"ID":"61a58f46-d0e7-4ca3-b01d-52758e84d242","Type":"ContainerDied","Data":"1acd1cedb41b23acacdbe8c8eb111bf96cc8bfa3a7a7b62554a2c5147bd92b48"} Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.705983 4520 scope.go:117] "RemoveContainer" containerID="ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.733831 4520 scope.go:117] "RemoveContainer" containerID="f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.764611 4520 scope.go:117] "RemoveContainer" containerID="19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.800409 4520 scope.go:117] "RemoveContainer" containerID="ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a" Jan 30 07:02:40 crc kubenswrapper[4520]: E0130 07:02:40.800976 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a\": container with ID starting with ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a not found: ID does not exist" containerID="ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.801012 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a"} err="failed to get container status \"ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a\": rpc error: code = NotFound desc = could not find container \"ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a\": container with ID starting with ea80ad052603ae122075e2fe1162fa26293c1e98ab3c9208d5bc3d2411f01a1a not found: ID does not exist" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.801043 4520 scope.go:117] "RemoveContainer" containerID="f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029" Jan 30 07:02:40 crc kubenswrapper[4520]: E0130 07:02:40.801385 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029\": container with ID starting with f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029 not found: ID does not exist" containerID="f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.801412 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029"} err="failed to get container status \"f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029\": rpc error: code = NotFound desc = could not find container \"f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029\": container with ID starting with f62c4efbc1b333335644261e8b0a80b2274b946456f82ac0170cbac94157d029 not found: ID does not exist" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.801430 4520 scope.go:117] "RemoveContainer" containerID="19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0" Jan 30 07:02:40 crc kubenswrapper[4520]: E0130 07:02:40.801813 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0\": container with ID starting with 19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0 not found: ID does not exist" containerID="19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0" Jan 30 07:02:40 crc kubenswrapper[4520]: I0130 07:02:40.801832 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0"} err="failed to get container status \"19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0\": rpc error: code = NotFound desc = could not find container \"19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0\": container with ID starting with 19f5731f0fdfb18c38c8ab106f065a89a9e0d9069edcf33cabbd9c07f9df1fc0 not found: ID does not exist" Jan 30 07:02:42 crc kubenswrapper[4520]: E0130 07:02:42.914932 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:42 crc kubenswrapper[4520]: E0130 07:02:42.917049 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:42 crc kubenswrapper[4520]: E0130 07:02:42.919023 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:42 crc kubenswrapper[4520]: E0130 07:02:42.919121 4520 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-cell0-conductor-0" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.724979 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.738625 4520 generic.go:334] "Generic (PLEG): container finished" podID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerID="9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed" exitCode=137 Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.738710 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerDied","Data":"9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed"} Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.738737 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.738776 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd7bb964-36cf-4819-9468-95da06ce8e86","Type":"ContainerDied","Data":"64d3bc2c53bb85f622233e6ed311cd0d4f060042205fe14393c384e51a261e5b"} Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.738804 4520 scope.go:117] "RemoveContainer" containerID="9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.770216 4520 scope.go:117] "RemoveContainer" containerID="bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.789789 4520 scope.go:117] "RemoveContainer" containerID="679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.810557 4520 scope.go:117] "RemoveContainer" containerID="ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.821242 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-combined-ca-bundle\") pod \"dd7bb964-36cf-4819-9468-95da06ce8e86\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.821378 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-config-data\") pod \"dd7bb964-36cf-4819-9468-95da06ce8e86\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.821589 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-log-httpd\") pod \"dd7bb964-36cf-4819-9468-95da06ce8e86\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.821691 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ktn\" (UniqueName: \"kubernetes.io/projected/dd7bb964-36cf-4819-9468-95da06ce8e86-kube-api-access-d8ktn\") pod \"dd7bb964-36cf-4819-9468-95da06ce8e86\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.821729 4520 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-sg-core-conf-yaml\") pod \"dd7bb964-36cf-4819-9468-95da06ce8e86\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.821858 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-run-httpd\") pod \"dd7bb964-36cf-4819-9468-95da06ce8e86\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.821879 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-scripts\") pod \"dd7bb964-36cf-4819-9468-95da06ce8e86\" (UID: \"dd7bb964-36cf-4819-9468-95da06ce8e86\") " Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.823064 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd7bb964-36cf-4819-9468-95da06ce8e86" (UID: "dd7bb964-36cf-4819-9468-95da06ce8e86"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.824164 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd7bb964-36cf-4819-9468-95da06ce8e86" (UID: "dd7bb964-36cf-4819-9468-95da06ce8e86"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.832410 4520 scope.go:117] "RemoveContainer" containerID="9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed" Jan 30 07:02:44 crc kubenswrapper[4520]: E0130 07:02:44.832861 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed\": container with ID starting with 9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed not found: ID does not exist" containerID="9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.832915 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed"} err="failed to get container status \"9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed\": rpc error: code = NotFound desc = could not find container \"9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed\": container with ID starting with 9e582196b9e20501c25168dcb54626008d1d29e068d89e5432080387eba2ebed not found: ID does not exist" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.832949 4520 scope.go:117] "RemoveContainer" containerID="bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc" Jan 30 07:02:44 crc kubenswrapper[4520]: E0130 07:02:44.833481 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc\": container with ID starting with 
bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc not found: ID does not exist" containerID="bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.833631 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc"} err="failed to get container status \"bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc\": rpc error: code = NotFound desc = could not find container \"bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc\": container with ID starting with bb992e2f2e669fd12bd5e3894867bb86258d4e74262316ef242c6932e09f52dc not found: ID does not exist" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.833748 4520 scope.go:117] "RemoveContainer" containerID="679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45" Jan 30 07:02:44 crc kubenswrapper[4520]: E0130 07:02:44.834113 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45\": container with ID starting with 679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45 not found: ID does not exist" containerID="679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.834147 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45"} err="failed to get container status \"679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45\": rpc error: code = NotFound desc = could not find container \"679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45\": container with ID starting with 679c0b1fdc16590c932353130eec83288d16057868cc5f8a0453cc8768093d45 not found: ID does not exist" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.834196 4520 scope.go:117] "RemoveContainer" containerID="ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f" Jan 30 07:02:44 crc kubenswrapper[4520]: E0130 07:02:44.834479 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f\": container with ID starting with ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f not found: ID does not exist" containerID="ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.834506 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f"} err="failed to get container status \"ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f\": rpc error: code = NotFound desc = could not find container \"ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f\": container with ID starting with ba06a297e6140e0b7c364e07db391210f4c619bdfbd792bec27058564f72390f not found: ID does not exist" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.841418 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7bb964-36cf-4819-9468-95da06ce8e86-kube-api-access-d8ktn" (OuterVolumeSpecName: "kube-api-access-d8ktn") pod 
"dd7bb964-36cf-4819-9468-95da06ce8e86" (UID: "dd7bb964-36cf-4819-9468-95da06ce8e86"). InnerVolumeSpecName "kube-api-access-d8ktn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.847881 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-scripts" (OuterVolumeSpecName: "scripts") pod "dd7bb964-36cf-4819-9468-95da06ce8e86" (UID: "dd7bb964-36cf-4819-9468-95da06ce8e86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.852716 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd7bb964-36cf-4819-9468-95da06ce8e86" (UID: "dd7bb964-36cf-4819-9468-95da06ce8e86"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.892679 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd7bb964-36cf-4819-9468-95da06ce8e86" (UID: "dd7bb964-36cf-4819-9468-95da06ce8e86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.910864 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-config-data" (OuterVolumeSpecName: "config-data") pod "dd7bb964-36cf-4819-9468-95da06ce8e86" (UID: "dd7bb964-36cf-4819-9468-95da06ce8e86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.925452 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.925595 4520 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.925688 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ktn\" (UniqueName: \"kubernetes.io/projected/dd7bb964-36cf-4819-9468-95da06ce8e86-kube-api-access-d8ktn\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.925760 4520 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.925827 4520 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd7bb964-36cf-4819-9468-95da06ce8e86-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.925897 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:44 crc kubenswrapper[4520]: I0130 07:02:44.925956 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd7bb964-36cf-4819-9468-95da06ce8e86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.067079 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.073998 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.103530 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:02:45 crc kubenswrapper[4520]: E0130 07:02:45.103968 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="proxy-httpd" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.103993 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="proxy-httpd" Jan 30 07:02:45 crc kubenswrapper[4520]: E0130 07:02:45.104011 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="ceilometer-central-agent" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104017 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="ceilometer-central-agent" Jan 30 07:02:45 crc kubenswrapper[4520]: E0130 07:02:45.104033 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="sg-core" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104039 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="sg-core" Jan 30 07:02:45 crc 
kubenswrapper[4520]: E0130 07:02:45.104048 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58bb950-bc15-4ca5-9e01-49c1e92fdf24" containerName="heat-engine" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104054 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58bb950-bc15-4ca5-9e01-49c1e92fdf24" containerName="heat-engine" Jan 30 07:02:45 crc kubenswrapper[4520]: E0130 07:02:45.104066 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="ceilometer-notification-agent" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104072 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="ceilometer-notification-agent" Jan 30 07:02:45 crc kubenswrapper[4520]: E0130 07:02:45.104083 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="extract-utilities" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104088 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="extract-utilities" Jan 30 07:02:45 crc kubenswrapper[4520]: E0130 07:02:45.104099 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="registry-server" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104104 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="registry-server" Jan 30 07:02:45 crc kubenswrapper[4520]: E0130 07:02:45.104124 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="extract-content" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104131 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="extract-content" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104297 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="proxy-httpd" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104313 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" containerName="registry-server" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104324 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="ceilometer-notification-agent" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104333 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="sg-core" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104344 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="a58bb950-bc15-4ca5-9e01-49c1e92fdf24" containerName="heat-engine" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.104354 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" containerName="ceilometer-central-agent" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.105915 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.111445 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.120071 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.136167 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-config-data\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.136333 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.136370 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-run-httpd\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.136443 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-log-httpd\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.136474 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.136510 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7qf\" (UniqueName: \"kubernetes.io/projected/fd728108-debc-4baa-8a2d-b82733e5976a-kube-api-access-nz7qf\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.136622 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-scripts\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.140832 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.239375 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 
07:02:45.239836 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-run-httpd\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.239975 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-log-httpd\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.240009 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.240052 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7qf\" (UniqueName: \"kubernetes.io/projected/fd728108-debc-4baa-8a2d-b82733e5976a-kube-api-access-nz7qf\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.240151 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-scripts\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.240196 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-config-data\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.240781 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-log-httpd\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.241226 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-run-httpd\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.245058 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.253261 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.253263 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-scripts\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.254619 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-config-data\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.255624 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7qf\" (UniqueName: \"kubernetes.io/projected/fd728108-debc-4baa-8a2d-b82733e5976a-kube-api-access-nz7qf\") pod \"ceilometer-0\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.438316 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:02:45 crc kubenswrapper[4520]: I0130 07:02:45.902298 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:02:46 crc kubenswrapper[4520]: I0130 07:02:46.697111 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd7bb964-36cf-4819-9468-95da06ce8e86" path="/var/lib/kubelet/pods/dd7bb964-36cf-4819-9468-95da06ce8e86/volumes" Jan 30 07:02:46 crc kubenswrapper[4520]: I0130 07:02:46.773480 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerStarted","Data":"e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701"} Jan 30 07:02:46 crc kubenswrapper[4520]: I0130 07:02:46.773580 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerStarted","Data":"d8cdc16e3c8db2888dbfd66f0c61f55bccfd98a17c8984172b2989703d2c2a38"} Jan 30 07:02:47 crc kubenswrapper[4520]: I0130 07:02:47.786057 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerStarted","Data":"7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c"} Jan 30 07:02:47 crc kubenswrapper[4520]: E0130 07:02:47.914735 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:47 crc kubenswrapper[4520]: E0130 07:02:47.916271 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:47 crc kubenswrapper[4520]: E0130 07:02:47.917705 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:47 crc kubenswrapper[4520]: E0130 07:02:47.917758 4520 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" Jan 30 07:02:48 crc kubenswrapper[4520]: I0130 07:02:48.820268 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerStarted","Data":"9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4"} Jan 30 07:02:50 crc kubenswrapper[4520]: I0130 07:02:50.839852 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerStarted","Data":"18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5"} Jan 30 07:02:50 crc kubenswrapper[4520]: I0130 07:02:50.840506 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 07:02:50 crc kubenswrapper[4520]: I0130 07:02:50.868954 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.548535721 podStartE2EDuration="5.86893854s" podCreationTimestamp="2026-01-30 07:02:45 +0000 UTC" firstStartedPulling="2026-01-30 07:02:45.90768119 +0000 UTC m=+1079.536033372" lastFinishedPulling="2026-01-30 07:02:50.228084011 +0000 UTC m=+1083.856436191" observedRunningTime="2026-01-30 07:02:50.864384879 +0000 UTC m=+1084.492737061" watchObservedRunningTime="2026-01-30 07:02:50.86893854 +0000 UTC m=+1084.497290721" Jan 30 07:02:52 crc kubenswrapper[4520]: E0130 07:02:52.914357 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:52 crc kubenswrapper[4520]: E0130 07:02:52.917029 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:52 crc kubenswrapper[4520]: E0130 07:02:52.918242 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 07:02:52 crc kubenswrapper[4520]: E0130 07:02:52.918308 4520 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.846551 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.878785 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-combined-ca-bundle\") pod \"f618127c-58ad-486c-8301-87a0f1621727\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.878879 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f57b2\" (UniqueName: \"kubernetes.io/projected/f618127c-58ad-486c-8301-87a0f1621727-kube-api-access-f57b2\") pod \"f618127c-58ad-486c-8301-87a0f1621727\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.878963 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-config-data\") pod \"f618127c-58ad-486c-8301-87a0f1621727\" (UID: \"f618127c-58ad-486c-8301-87a0f1621727\") " Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.907547 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f618127c-58ad-486c-8301-87a0f1621727-kube-api-access-f57b2" (OuterVolumeSpecName: "kube-api-access-f57b2") pod "f618127c-58ad-486c-8301-87a0f1621727" (UID: "f618127c-58ad-486c-8301-87a0f1621727"). InnerVolumeSpecName "kube-api-access-f57b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.918585 4520 generic.go:334] "Generic (PLEG): container finished" podID="f618127c-58ad-486c-8301-87a0f1621727" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" exitCode=137 Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.918656 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f618127c-58ad-486c-8301-87a0f1621727","Type":"ContainerDied","Data":"b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82"} Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.918703 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f618127c-58ad-486c-8301-87a0f1621727","Type":"ContainerDied","Data":"251883da6571c6cf1f349dad354d12d1a5ad3d420cb761137bba70e545d0c66b"} Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.918675 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.918724 4520 scope.go:117] "RemoveContainer" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.925681 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-config-data" (OuterVolumeSpecName: "config-data") pod "f618127c-58ad-486c-8301-87a0f1621727" (UID: "f618127c-58ad-486c-8301-87a0f1621727"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.927678 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f618127c-58ad-486c-8301-87a0f1621727" (UID: "f618127c-58ad-486c-8301-87a0f1621727"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.968376 4520 scope.go:117] "RemoveContainer" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" Jan 30 07:02:56 crc kubenswrapper[4520]: E0130 07:02:56.969109 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82\": container with ID starting with b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82 not found: ID does not exist" containerID="b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.969145 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82"} err="failed to get container status \"b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82\": rpc error: code = NotFound desc = could not find container \"b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82\": container with ID starting with b0cdeeae3c3487cfd20b7eccfcce533cbe77fefaa4e93b215ddebf6ab3681f82 not found: ID does not exist" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.986395 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.986553 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f57b2\" (UniqueName: \"kubernetes.io/projected/f618127c-58ad-486c-8301-87a0f1621727-kube-api-access-f57b2\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:56 crc kubenswrapper[4520]: I0130 07:02:56.986630 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f618127c-58ad-486c-8301-87a0f1621727-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.256820 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.268392 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.283483 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:57 crc kubenswrapper[4520]: E0130 07:02:57.283934 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.283952 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.284126 4520 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f618127c-58ad-486c-8301-87a0f1621727" containerName="nova-cell0-conductor-conductor" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.284785 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.287908 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.288093 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dtmtd" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.294489 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.399131 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390b0ef4-f2a0-46b7-a33f-c287020fdc83-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.399541 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390b0ef4-f2a0-46b7-a33f-c287020fdc83-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.399741 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvngf\" (UniqueName: \"kubernetes.io/projected/390b0ef4-f2a0-46b7-a33f-c287020fdc83-kube-api-access-nvngf\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.500318 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvngf\" (UniqueName: \"kubernetes.io/projected/390b0ef4-f2a0-46b7-a33f-c287020fdc83-kube-api-access-nvngf\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.500388 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390b0ef4-f2a0-46b7-a33f-c287020fdc83-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.500414 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390b0ef4-f2a0-46b7-a33f-c287020fdc83-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.506496 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390b0ef4-f2a0-46b7-a33f-c287020fdc83-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc 
kubenswrapper[4520]: I0130 07:02:57.506543 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390b0ef4-f2a0-46b7-a33f-c287020fdc83-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.515327 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvngf\" (UniqueName: \"kubernetes.io/projected/390b0ef4-f2a0-46b7-a33f-c287020fdc83-kube-api-access-nvngf\") pod \"nova-cell0-conductor-0\" (UID: \"390b0ef4-f2a0-46b7-a33f-c287020fdc83\") " pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:57 crc kubenswrapper[4520]: I0130 07:02:57.605049 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:58 crc kubenswrapper[4520]: I0130 07:02:58.035320 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 07:02:58 crc kubenswrapper[4520]: W0130 07:02:58.042545 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod390b0ef4_f2a0_46b7_a33f_c287020fdc83.slice/crio-0a56d7eaf1e584ea7df0aa838dd750b32e5b999bb922de60bbfc5aa4c264e612 WatchSource:0}: Error finding container 0a56d7eaf1e584ea7df0aa838dd750b32e5b999bb922de60bbfc5aa4c264e612: Status 404 returned error can't find the container with id 0a56d7eaf1e584ea7df0aa838dd750b32e5b999bb922de60bbfc5aa4c264e612 Jan 30 07:02:58 crc kubenswrapper[4520]: I0130 07:02:58.696807 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f618127c-58ad-486c-8301-87a0f1621727" path="/var/lib/kubelet/pods/f618127c-58ad-486c-8301-87a0f1621727/volumes" Jan 30 07:02:58 crc kubenswrapper[4520]: I0130 07:02:58.941736 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"390b0ef4-f2a0-46b7-a33f-c287020fdc83","Type":"ContainerStarted","Data":"e802626c87b83dd9b3920bf85e1ad27ac2daed8f7ef77a23f4ce178cfc744e6a"} Jan 30 07:02:58 crc kubenswrapper[4520]: I0130 07:02:58.941984 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"390b0ef4-f2a0-46b7-a33f-c287020fdc83","Type":"ContainerStarted","Data":"0a56d7eaf1e584ea7df0aa838dd750b32e5b999bb922de60bbfc5aa4c264e612"} Jan 30 07:02:58 crc kubenswrapper[4520]: I0130 07:02:58.942078 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 07:02:58 crc kubenswrapper[4520]: I0130 07:02:58.963816 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.963796061 podStartE2EDuration="1.963796061s" podCreationTimestamp="2026-01-30 07:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:02:58.957033787 +0000 UTC m=+1092.585385978" watchObservedRunningTime="2026-01-30 07:02:58.963796061 +0000 UTC m=+1092.592148241" Jan 30 07:03:07 crc kubenswrapper[4520]: I0130 07:03:07.636179 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.141310 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dlt85"] Jan 30 
07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.142721 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.150837 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.150980 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.165289 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dlt85"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.220278 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-config-data\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.220339 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.220563 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-scripts\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.220646 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2tn\" (UniqueName: \"kubernetes.io/projected/13d598af-4041-4d4e-8594-56d19d1225f5-kube-api-access-4j2tn\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.321923 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-scripts\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.321990 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j2tn\" (UniqueName: \"kubernetes.io/projected/13d598af-4041-4d4e-8594-56d19d1225f5-kube-api-access-4j2tn\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.322037 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-config-data\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.322057 4520 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.331569 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-config-data\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.333056 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-scripts\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.343749 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.354553 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j2tn\" (UniqueName: \"kubernetes.io/projected/13d598af-4041-4d4e-8594-56d19d1225f5-kube-api-access-4j2tn\") pod \"nova-cell0-cell-mapping-dlt85\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") " pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.409179 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.446504 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.460326 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.463594 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.467282 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dlt85" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.469591 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.470145 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.556846 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btk7r\" (UniqueName: \"kubernetes.io/projected/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-kube-api-access-btk7r\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.556972 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsdwd\" (UniqueName: \"kubernetes.io/projected/f62fce94-0031-431f-a8a9-213c4b0b4a2e-kube-api-access-lsdwd\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.557127 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.557163 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.557308 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-config-data\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.557442 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-config-data\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.557490 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62fce94-0031-431f-a8a9-213c4b0b4a2e-logs\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.557508 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-logs\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.563241 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 
07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.590005 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.614746 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.616169 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.618991 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.619129 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661358 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrs82\" (UniqueName: \"kubernetes.io/projected/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-kube-api-access-rrs82\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661395 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-config-data\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661454 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btk7r\" (UniqueName: \"kubernetes.io/projected/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-kube-api-access-btk7r\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661491 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsdwd\" (UniqueName: \"kubernetes.io/projected/f62fce94-0031-431f-a8a9-213c4b0b4a2e-kube-api-access-lsdwd\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661546 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661569 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661617 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-config-data\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661690 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661717 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-config-data\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661752 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62fce94-0031-431f-a8a9-213c4b0b4a2e-logs\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.661770 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-logs\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.662151 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-logs\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.670200 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62fce94-0031-431f-a8a9-213c4b0b4a2e-logs\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.674276 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-config-data\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.692480 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsdwd\" (UniqueName: \"kubernetes.io/projected/f62fce94-0031-431f-a8a9-213c4b0b4a2e-kube-api-access-lsdwd\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.708733 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.720086 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btk7r\" (UniqueName: \"kubernetes.io/projected/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-kube-api-access-btk7r\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.728080 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.728979 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-config-data\") pod \"nova-metadata-0\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.753147 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.754303 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.755679 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.761162 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.766996 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.767059 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.767224 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrs82\" (UniqueName: \"kubernetes.io/projected/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-kube-api-access-rrs82\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.767248 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-config-data\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.767381 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.767492 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28ll\" (UniqueName: \"kubernetes.io/projected/900d6126-3c05-4fa2-9f32-f444ff2ed311-kube-api-access-t28ll\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc 
kubenswrapper[4520]: I0130 07:03:08.775918 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.779961 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69784c8cfc-c4j47"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.780281 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.781925 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.794699 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-config-data\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.803125 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69784c8cfc-c4j47"] Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.809800 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrs82\" (UniqueName: \"kubernetes.io/projected/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-kube-api-access-rrs82\") pod \"nova-scheduler-0\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.866963 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885186 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-swift-storage-0\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885227 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znh27\" (UniqueName: \"kubernetes.io/projected/70d249d0-c436-449d-a28a-f565dd87be43-kube-api-access-znh27\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885269 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-nb\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885295 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885338 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-svc\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885359 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28ll\" (UniqueName: \"kubernetes.io/projected/900d6126-3c05-4fa2-9f32-f444ff2ed311-kube-api-access-t28ll\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885392 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-sb\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885426 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.885492 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-config\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" 
(UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.896922 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.911650 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.916032 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28ll\" (UniqueName: \"kubernetes.io/projected/900d6126-3c05-4fa2-9f32-f444ff2ed311-kube-api-access-t28ll\") pod \"nova-cell1-novncproxy-0\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.989357 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-swift-storage-0\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.989677 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znh27\" (UniqueName: \"kubernetes.io/projected/70d249d0-c436-449d-a28a-f565dd87be43-kube-api-access-znh27\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.989793 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-nb\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.989915 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-svc\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.990009 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-sb\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.990163 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-config\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 
07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.991188 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-config\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.994935 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-nb\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:08 crc kubenswrapper[4520]: I0130 07:03:08.995100 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-svc\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.000898 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-swift-storage-0\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.012039 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-sb\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.021208 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znh27\" (UniqueName: \"kubernetes.io/projected/70d249d0-c436-449d-a28a-f565dd87be43-kube-api-access-znh27\") pod \"dnsmasq-dns-69784c8cfc-c4j47\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.032710 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.110527 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.157104 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.473062 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dlt85"] Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.532159 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.649342 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.732597 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:09 crc kubenswrapper[4520]: I0130 07:03:09.889489 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:03:09 crc kubenswrapper[4520]: W0130 07:03:09.911530 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b6d494a_0c95_4c49_ab5e_41ea8ce094ba.slice/crio-6449ab48c9b459bc39a82bbb86baad190a42b10af9b08f68efd84b98e6e4ba64 WatchSource:0}: Error finding container 6449ab48c9b459bc39a82bbb86baad190a42b10af9b08f68efd84b98e6e4ba64: Status 404 returned error can't find the container with id 6449ab48c9b459bc39a82bbb86baad190a42b10af9b08f68efd84b98e6e4ba64 Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.094370 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f62fce94-0031-431f-a8a9-213c4b0b4a2e","Type":"ContainerStarted","Data":"1e81b27fd80f4f8ca9321b0317652bae7d1a15656ef0c3ea26482c2d9f73993d"} Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.095787 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dlt85" event={"ID":"13d598af-4041-4d4e-8594-56d19d1225f5","Type":"ContainerStarted","Data":"c3cf98f8b16c1d7fee08a59d751e517faf1688c94105516cae5d5be7dfa204b4"} Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.095840 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dlt85" event={"ID":"13d598af-4041-4d4e-8594-56d19d1225f5","Type":"ContainerStarted","Data":"ac712d8f06a3aa85575f507b9e928d390598598e1cb30dca635874a9d14aeba8"} Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.102472 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba","Type":"ContainerStarted","Data":"6449ab48c9b459bc39a82bbb86baad190a42b10af9b08f68efd84b98e6e4ba64"} Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.103778 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83","Type":"ContainerStarted","Data":"40283b214d4f9b0be326d32c3fe35b000b401a068c6046fb71ab275ca567d357"} Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.115453 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dlt85" podStartSLOduration=2.115438293 podStartE2EDuration="2.115438293s" podCreationTimestamp="2026-01-30 07:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:10.110803419 +0000 UTC m=+1103.739155600" watchObservedRunningTime="2026-01-30 07:03:10.115438293 +0000 UTC m=+1103.743790474" Jan 30 07:03:10 crc kubenswrapper[4520]: W0130 
07:03:10.214500 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod900d6126_3c05_4fa2_9f32_f444ff2ed311.slice/crio-a82f565879785414df410d18ce88519704b43dbeb581b1136b09eaae332ff8c2 WatchSource:0}: Error finding container a82f565879785414df410d18ce88519704b43dbeb581b1136b09eaae332ff8c2: Status 404 returned error can't find the container with id a82f565879785414df410d18ce88519704b43dbeb581b1136b09eaae332ff8c2 Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.215204 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.226463 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69784c8cfc-c4j47"] Jan 30 07:03:10 crc kubenswrapper[4520]: W0130 07:03:10.250233 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d249d0_c436_449d_a28a_f565dd87be43.slice/crio-7a5ef1c879b0202e73d8c281d51a33d4d797569c6d2ac5c6d733c0ac873611a2 WatchSource:0}: Error finding container 7a5ef1c879b0202e73d8c281d51a33d4d797569c6d2ac5c6d733c0ac873611a2: Status 404 returned error can't find the container with id 7a5ef1c879b0202e73d8c281d51a33d4d797569c6d2ac5c6d733c0ac873611a2 Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.351581 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pkvxs"] Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.353790 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pkvxs" Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.357666 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.357936 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.388426 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pkvxs"] Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.438136 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs" Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.438187 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-config-data\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs" Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.438283 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwc7r\" (UniqueName: \"kubernetes.io/projected/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-kube-api-access-mwc7r\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs" Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 
07:03:10.438343 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-scripts\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.540967 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwc7r\" (UniqueName: \"kubernetes.io/projected/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-kube-api-access-mwc7r\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.541061 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-scripts\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.541252 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.541278 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-config-data\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.548147 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.548152 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-scripts\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.558575 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwc7r\" (UniqueName: \"kubernetes.io/projected/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-kube-api-access-mwc7r\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.560017 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-config-data\") pod \"nova-cell1-conductor-db-sync-pkvxs\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") " pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.720800 4520 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod61a58f46-d0e7-4ca3-b01d-52758e84d242"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod61a58f46-d0e7-4ca3-b01d-52758e84d242] : Timed out while waiting for systemd to remove kubepods-burstable-pod61a58f46_d0e7_4ca3_b01d_52758e84d242.slice"
Jan 30 07:03:10 crc kubenswrapper[4520]: E0130 07:03:10.721209 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod61a58f46-d0e7-4ca3-b01d-52758e84d242] : unable to destroy cgroup paths for cgroup [kubepods burstable pod61a58f46-d0e7-4ca3-b01d-52758e84d242] : Timed out while waiting for systemd to remove kubepods-burstable-pod61a58f46_d0e7_4ca3_b01d_52758e84d242.slice" pod="openshift-marketplace/redhat-operators-lrqhk" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242"
Jan 30 07:03:10 crc kubenswrapper[4520]: I0130 07:03:10.754854 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:11 crc kubenswrapper[4520]: I0130 07:03:11.118081 4520 generic.go:334] "Generic (PLEG): container finished" podID="70d249d0-c436-449d-a28a-f565dd87be43" containerID="df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824" exitCode=0
Jan 30 07:03:11 crc kubenswrapper[4520]: I0130 07:03:11.118335 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" event={"ID":"70d249d0-c436-449d-a28a-f565dd87be43","Type":"ContainerDied","Data":"df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824"}
Jan 30 07:03:11 crc kubenswrapper[4520]: I0130 07:03:11.118533 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" event={"ID":"70d249d0-c436-449d-a28a-f565dd87be43","Type":"ContainerStarted","Data":"7a5ef1c879b0202e73d8c281d51a33d4d797569c6d2ac5c6d733c0ac873611a2"}
Jan 30 07:03:11 crc kubenswrapper[4520]: I0130 07:03:11.141788 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"900d6126-3c05-4fa2-9f32-f444ff2ed311","Type":"ContainerStarted","Data":"a82f565879785414df410d18ce88519704b43dbeb581b1136b09eaae332ff8c2"}
Jan 30 07:03:11 crc kubenswrapper[4520]: I0130 07:03:11.141894 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrqhk"
Jan 30 07:03:11 crc kubenswrapper[4520]: I0130 07:03:11.234884 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrqhk"]
Jan 30 07:03:11 crc kubenswrapper[4520]: I0130 07:03:11.267464 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lrqhk"]
Jan 30 07:03:11 crc kubenswrapper[4520]: I0130 07:03:11.381223 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pkvxs"]
Jan 30 07:03:11 crc kubenswrapper[4520]: W0130 07:03:11.421326 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod336ca74f_9c49_494d_a5fd_4b67fa9dc2c7.slice/crio-87e6ba19901371043138668b37448e9baf66f1a2c12bbaf15c7cf84ee74220af WatchSource:0}: Error finding container 87e6ba19901371043138668b37448e9baf66f1a2c12bbaf15c7cf84ee74220af: Status 404 returned error can't find the container with id 87e6ba19901371043138668b37448e9baf66f1a2c12bbaf15c7cf84ee74220af
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.175265 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pkvxs" event={"ID":"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7","Type":"ContainerStarted","Data":"c1e157615aef27a6d2496569f4094023ce4b356a17064d3dafd925d951d2bebc"}
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.175320 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pkvxs" event={"ID":"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7","Type":"ContainerStarted","Data":"87e6ba19901371043138668b37448e9baf66f1a2c12bbaf15c7cf84ee74220af"}
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.181342 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" event={"ID":"70d249d0-c436-449d-a28a-f565dd87be43","Type":"ContainerStarted","Data":"caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45"}
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.182362 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47"
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.225004 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pkvxs" podStartSLOduration=2.224987258 podStartE2EDuration="2.224987258s" podCreationTimestamp="2026-01-30 07:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:12.19669178 +0000 UTC m=+1105.825043961" watchObservedRunningTime="2026-01-30 07:03:12.224987258 +0000 UTC m=+1105.853339439"
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.255421 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.274559 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.277086 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" podStartSLOduration=4.277069225 podStartE2EDuration="4.277069225s" podCreationTimestamp="2026-01-30 07:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:12.230720898 +0000 UTC m=+1105.859073079" watchObservedRunningTime="2026-01-30 07:03:12.277069225 +0000 UTC m=+1105.905421396"
Jan 30 07:03:12 crc kubenswrapper[4520]: I0130 07:03:12.699001 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a58f46-d0e7-4ca3-b01d-52758e84d242" path="/var/lib/kubelet/pods/61a58f46-d0e7-4ca3-b01d-52758e84d242/volumes"
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.212471 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"900d6126-3c05-4fa2-9f32-f444ff2ed311","Type":"ContainerStarted","Data":"5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7"}
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.212586 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="900d6126-3c05-4fa2-9f32-f444ff2ed311" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7" gracePeriod=30
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.214932 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f62fce94-0031-431f-a8a9-213c4b0b4a2e","Type":"ContainerStarted","Data":"2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3"}
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.214974 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f62fce94-0031-431f-a8a9-213c4b0b4a2e","Type":"ContainerStarted","Data":"965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0"}
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.217475 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba","Type":"ContainerStarted","Data":"140c2644f6424f5407d1d2a9ae130c017e4959f0fa96f42b0a75b0679bc8bcea"}
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.219313 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83","Type":"ContainerStarted","Data":"b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106"}
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.219341 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83","Type":"ContainerStarted","Data":"99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a"}
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.219418 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerName="nova-metadata-log" containerID="cri-o://99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a" gracePeriod=30
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.219650 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerName="nova-metadata-metadata" containerID="cri-o://b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106" gracePeriod=30
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.257817 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.804761764 podStartE2EDuration="7.257797241s" podCreationTimestamp="2026-01-30 07:03:08 +0000 UTC" firstStartedPulling="2026-01-30 07:03:10.218313486 +0000 UTC m=+1103.846665667" lastFinishedPulling="2026-01-30 07:03:14.671348964 +0000 UTC m=+1108.299701144" observedRunningTime="2026-01-30 07:03:15.231882314 +0000 UTC m=+1108.860234495" watchObservedRunningTime="2026-01-30 07:03:15.257797241 +0000 UTC m=+1108.886149422"
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.258531 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.327418995 podStartE2EDuration="7.258526972s" podCreationTimestamp="2026-01-30 07:03:08 +0000 UTC" firstStartedPulling="2026-01-30 07:03:09.773287527 +0000 UTC m=+1103.401639708" lastFinishedPulling="2026-01-30 07:03:14.704395513 +0000 UTC m=+1108.332747685" observedRunningTime="2026-01-30 07:03:15.251333647 +0000 UTC m=+1108.879685828" watchObservedRunningTime="2026-01-30 07:03:15.258526972 +0000 UTC m=+1108.886879153"
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.274227 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.250697067 podStartE2EDuration="7.274206746s" podCreationTimestamp="2026-01-30 07:03:08 +0000 UTC" firstStartedPulling="2026-01-30 07:03:09.64776736 +0000 UTC m=+1103.276119541" lastFinishedPulling="2026-01-30 07:03:14.671277048 +0000 UTC m=+1108.299629220" observedRunningTime="2026-01-30 07:03:15.269307222 +0000 UTC m=+1108.897659404" watchObservedRunningTime="2026-01-30 07:03:15.274206746 +0000 UTC m=+1108.902558916"
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.296317 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.54557419 podStartE2EDuration="7.29628582s" podCreationTimestamp="2026-01-30 07:03:08 +0000 UTC" firstStartedPulling="2026-01-30 07:03:09.918771786 +0000 UTC m=+1103.547123967" lastFinishedPulling="2026-01-30 07:03:14.669483426 +0000 UTC m=+1108.297835597" observedRunningTime="2026-01-30 07:03:15.291686481 +0000 UTC m=+1108.920038662" watchObservedRunningTime="2026-01-30 07:03:15.29628582 +0000 UTC m=+1108.924638000"
Jan 30 07:03:15 crc kubenswrapper[4520]: I0130 07:03:15.450484 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 07:03:16 crc kubenswrapper[4520]: I0130 07:03:16.231307 4520 generic.go:334] "Generic (PLEG): container finished" podID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerID="99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a" exitCode=143
Jan 30 07:03:16 crc kubenswrapper[4520]: I0130 07:03:16.231392 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83","Type":"ContainerDied","Data":"99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a"}
Jan 30 07:03:17 crc kubenswrapper[4520]: I0130 07:03:17.243071 4520 generic.go:334] "Generic (PLEG): container finished" podID="336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" containerID="c1e157615aef27a6d2496569f4094023ce4b356a17064d3dafd925d951d2bebc" exitCode=0
Jan 30 07:03:17 crc kubenswrapper[4520]: I0130 07:03:17.243939 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pkvxs" event={"ID":"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7","Type":"ContainerDied","Data":"c1e157615aef27a6d2496569f4094023ce4b356a17064d3dafd925d951d2bebc"}
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.660643 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.686460 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-scripts\") pod \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") "
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.686533 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwc7r\" (UniqueName: \"kubernetes.io/projected/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-kube-api-access-mwc7r\") pod \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") "
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.686743 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-combined-ca-bundle\") pod \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") "
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.686852 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-config-data\") pod \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\" (UID: \"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7\") "
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.696432 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-kube-api-access-mwc7r" (OuterVolumeSpecName: "kube-api-access-mwc7r") pod "336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" (UID: "336ca74f-9c49-494d-a5fd-4b67fa9dc2c7"). InnerVolumeSpecName "kube-api-access-mwc7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.718742 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-scripts" (OuterVolumeSpecName: "scripts") pod "336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" (UID: "336ca74f-9c49-494d-a5fd-4b67fa9dc2c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.722339 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-config-data" (OuterVolumeSpecName: "config-data") pod "336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" (UID: "336ca74f-9c49-494d-a5fd-4b67fa9dc2c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.722706 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" (UID: "336ca74f-9c49-494d-a5fd-4b67fa9dc2c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.781881 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.781962 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.791997 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.792051 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.792067 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwc7r\" (UniqueName: \"kubernetes.io/projected/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-kube-api-access-mwc7r\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.792079 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.870475 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 07:03:18 crc kubenswrapper[4520]: I0130 07:03:18.870585 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.034703 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.034795 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.067776 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.111280 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.159629 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.288498 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65948bc6c-vwm6m"]
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.288793 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" podUID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" containerName="dnsmasq-dns" containerID="cri-o://2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58" gracePeriod=10
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.390904 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pkvxs" event={"ID":"336ca74f-9c49-494d-a5fd-4b67fa9dc2c7","Type":"ContainerDied","Data":"87e6ba19901371043138668b37448e9baf66f1a2c12bbaf15c7cf84ee74220af"}
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.390950 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87e6ba19901371043138668b37448e9baf66f1a2c12bbaf15c7cf84ee74220af"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.391039 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pkvxs"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.420824 4520 generic.go:334] "Generic (PLEG): container finished" podID="13d598af-4041-4d4e-8594-56d19d1225f5" containerID="c3cf98f8b16c1d7fee08a59d751e517faf1688c94105516cae5d5be7dfa204b4" exitCode=0
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.422105 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dlt85" event={"ID":"13d598af-4041-4d4e-8594-56d19d1225f5","Type":"ContainerDied","Data":"c3cf98f8b16c1d7fee08a59d751e517faf1688c94105516cae5d5be7dfa204b4"}
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.503600 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 07:03:19 crc kubenswrapper[4520]: E0130 07:03:19.504104 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" containerName="nova-cell1-conductor-db-sync"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.504124 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" containerName="nova-cell1-conductor-db-sync"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.504372 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" containerName="nova-cell1-conductor-db-sync"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.505129 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.510932 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.524869 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.564941 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08b944e-6f62-4ead-8331-8de6e60bd829-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.565228 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp9z8\" (UniqueName: \"kubernetes.io/projected/b08b944e-6f62-4ead-8331-8de6e60bd829-kube-api-access-hp9z8\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.565331 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08b944e-6f62-4ead-8331-8de6e60bd829-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.667215 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08b944e-6f62-4ead-8331-8de6e60bd829-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.667278 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp9z8\" (UniqueName: \"kubernetes.io/projected/b08b944e-6f62-4ead-8331-8de6e60bd829-kube-api-access-hp9z8\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.667385 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08b944e-6f62-4ead-8331-8de6e60bd829-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.671005 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08b944e-6f62-4ead-8331-8de6e60bd829-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.671472 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08b944e-6f62-4ead-8331-8de6e60bd829-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.679476 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.702799 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp9z8\" (UniqueName: \"kubernetes.io/projected/b08b944e-6f62-4ead-8331-8de6e60bd829-kube-api-access-hp9z8\") pod \"nova-cell1-conductor-0\" (UID: \"b08b944e-6f62-4ead-8331-8de6e60bd829\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.842061 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.867630 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 07:03:19 crc kubenswrapper[4520]: I0130 07:03:19.867720 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.232699 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.383039 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-config\") pod \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.383140 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-sb\") pod \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.383327 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-swift-storage-0\") pod \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.383446 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-nb\") pod \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.383474 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv4g7\" (UniqueName: \"kubernetes.io/projected/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-kube-api-access-zv4g7\") pod \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.383591 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-svc\") pod \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\" (UID: \"2c3b80d9-dfeb-4120-a523-4f4ceea700c8\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.404702 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-kube-api-access-zv4g7" (OuterVolumeSpecName: "kube-api-access-zv4g7") pod "2c3b80d9-dfeb-4120-a523-4f4ceea700c8" (UID: "2c3b80d9-dfeb-4120-a523-4f4ceea700c8"). InnerVolumeSpecName "kube-api-access-zv4g7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.445167 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-config" (OuterVolumeSpecName: "config") pod "2c3b80d9-dfeb-4120-a523-4f4ceea700c8" (UID: "2c3b80d9-dfeb-4120-a523-4f4ceea700c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.446438 4520 generic.go:334] "Generic (PLEG): container finished" podID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" containerID="2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58" exitCode=0
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.446671 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" event={"ID":"2c3b80d9-dfeb-4120-a523-4f4ceea700c8","Type":"ContainerDied","Data":"2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58"}
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.446739 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m" event={"ID":"2c3b80d9-dfeb-4120-a523-4f4ceea700c8","Type":"ContainerDied","Data":"66ba0aa88e99f1b221939b7be8ecb45406d776207aeba9bddb50058900bb8875"}
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.446772 4520 scope.go:117] "RemoveContainer" containerID="2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.447109 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65948bc6c-vwm6m"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.463812 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c3b80d9-dfeb-4120-a523-4f4ceea700c8" (UID: "2c3b80d9-dfeb-4120-a523-4f4ceea700c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.472163 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c3b80d9-dfeb-4120-a523-4f4ceea700c8" (UID: "2c3b80d9-dfeb-4120-a523-4f4ceea700c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.479617 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c3b80d9-dfeb-4120-a523-4f4ceea700c8" (UID: "2c3b80d9-dfeb-4120-a523-4f4ceea700c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.487332 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-config\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.487549 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.487561 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.487574 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.487584 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv4g7\" (UniqueName: \"kubernetes.io/projected/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-kube-api-access-zv4g7\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.488456 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c3b80d9-dfeb-4120-a523-4f4ceea700c8" (UID: "2c3b80d9-dfeb-4120-a523-4f4ceea700c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.506470 4520 scope.go:117] "RemoveContainer" containerID="fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.541299 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.550014 4520 scope.go:117] "RemoveContainer" containerID="2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58"
Jan 30 07:03:20 crc kubenswrapper[4520]: E0130 07:03:20.557654 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58\": container with ID starting with 2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58 not found: ID does not exist" containerID="2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.557703 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58"} err="failed to get container status \"2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58\": rpc error: code = NotFound desc = could not find container \"2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58\": container with ID starting with 2baa15b0138f4a0324144dcf203015759eee61a7163e3c2fbf40f115e3f0cf58 not found: ID does not exist"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.557734 4520 scope.go:117] "RemoveContainer" containerID="fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9"
Jan 30 07:03:20 crc kubenswrapper[4520]: E0130 07:03:20.559198 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9\": container with ID starting with fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9 not found: ID does not exist" containerID="fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.559244 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9"} err="failed to get container status \"fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9\": rpc error: code = NotFound desc = could not find container \"fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9\": container with ID starting with fd8df293993504736656150628e4c21b6223e6d43580a46f13b22035758f47e9 not found: ID does not exist"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.589541 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3b80d9-dfeb-4120-a523-4f4ceea700c8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.869357 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dlt85"
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.881737 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65948bc6c-vwm6m"]
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.892156 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65948bc6c-vwm6m"]
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.998534 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-combined-ca-bundle\") pod \"13d598af-4041-4d4e-8594-56d19d1225f5\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.998672 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j2tn\" (UniqueName: \"kubernetes.io/projected/13d598af-4041-4d4e-8594-56d19d1225f5-kube-api-access-4j2tn\") pod \"13d598af-4041-4d4e-8594-56d19d1225f5\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.999036 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-scripts\") pod \"13d598af-4041-4d4e-8594-56d19d1225f5\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") "
Jan 30 07:03:20 crc kubenswrapper[4520]: I0130 07:03:20.999087 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-config-data\") pod \"13d598af-4041-4d4e-8594-56d19d1225f5\" (UID: \"13d598af-4041-4d4e-8594-56d19d1225f5\") "
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.015747 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-scripts" (OuterVolumeSpecName: "scripts") pod "13d598af-4041-4d4e-8594-56d19d1225f5" (UID: "13d598af-4041-4d4e-8594-56d19d1225f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.025692 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d598af-4041-4d4e-8594-56d19d1225f5-kube-api-access-4j2tn" (OuterVolumeSpecName: "kube-api-access-4j2tn") pod "13d598af-4041-4d4e-8594-56d19d1225f5" (UID: "13d598af-4041-4d4e-8594-56d19d1225f5"). InnerVolumeSpecName "kube-api-access-4j2tn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.040300 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-config-data" (OuterVolumeSpecName: "config-data") pod "13d598af-4041-4d4e-8594-56d19d1225f5" (UID: "13d598af-4041-4d4e-8594-56d19d1225f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.057299 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13d598af-4041-4d4e-8594-56d19d1225f5" (UID: "13d598af-4041-4d4e-8594-56d19d1225f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.101587 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.101611 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.101621 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d598af-4041-4d4e-8594-56d19d1225f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.101631 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j2tn\" (UniqueName: \"kubernetes.io/projected/13d598af-4041-4d4e-8594-56d19d1225f5-kube-api-access-4j2tn\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.219819 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.220066 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="808069be-f7b4-4e1c-86d2-585915e49a1f" containerName="kube-state-metrics" containerID="cri-o://72c4b506da37fc371c227be33163d16699d0b85599ffc83a6f4e642a05b2fe48" gracePeriod=30
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.459765 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dlt85" event={"ID":"13d598af-4041-4d4e-8594-56d19d1225f5","Type":"ContainerDied","Data":"ac712d8f06a3aa85575f507b9e928d390598598e1cb30dca635874a9d14aeba8"}
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.460109 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac712d8f06a3aa85575f507b9e928d390598598e1cb30dca635874a9d14aeba8"
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.460058 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dlt85"
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.461624 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b08b944e-6f62-4ead-8331-8de6e60bd829","Type":"ContainerStarted","Data":"0cdea858b2db811b6ac628e70d186bac12ec36e34639085971936a7cc04eac43"}
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.461692 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b08b944e-6f62-4ead-8331-8de6e60bd829","Type":"ContainerStarted","Data":"8c8305bd8dc7db0f5560aebe6966b18c6dfc45b99f1d2c957c398e2c23c2e07e"}
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.463104 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.468115 4520 generic.go:334] "Generic (PLEG): container finished" podID="808069be-f7b4-4e1c-86d2-585915e49a1f" containerID="72c4b506da37fc371c227be33163d16699d0b85599ffc83a6f4e642a05b2fe48" exitCode=2
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.468183 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"808069be-f7b4-4e1c-86d2-585915e49a1f","Type":"ContainerDied","Data":"72c4b506da37fc371c227be33163d16699d0b85599ffc83a6f4e642a05b2fe48"}
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.492788 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.492774501 podStartE2EDuration="2.492774501s" podCreationTimestamp="2026-01-30 07:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:21.490359239 +0000 UTC m=+1115.118711419" watchObservedRunningTime="2026-01-30 07:03:21.492774501 +0000 UTC m=+1115.121126681"
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.636496 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.637721 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-log" containerID="cri-o://965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0" gracePeriod=30
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.637951 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-api" containerID="cri-o://2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3" gracePeriod=30
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.740731 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.741281 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.741688 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" containerName="nova-scheduler-scheduler" containerID="cri-o://140c2644f6424f5407d1d2a9ae130c017e4959f0fa96f42b0a75b0679bc8bcea" gracePeriod=30
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.948810 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjmn\" (UniqueName: \"kubernetes.io/projected/808069be-f7b4-4e1c-86d2-585915e49a1f-kube-api-access-xdjmn\") pod \"808069be-f7b4-4e1c-86d2-585915e49a1f\" (UID: \"808069be-f7b4-4e1c-86d2-585915e49a1f\") "
Jan 30 07:03:21 crc kubenswrapper[4520]: I0130 07:03:21.963837 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808069be-f7b4-4e1c-86d2-585915e49a1f-kube-api-access-xdjmn" (OuterVolumeSpecName: "kube-api-access-xdjmn") pod "808069be-f7b4-4e1c-86d2-585915e49a1f" (UID: "808069be-f7b4-4e1c-86d2-585915e49a1f"). InnerVolumeSpecName "kube-api-access-xdjmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.052284 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdjmn\" (UniqueName: \"kubernetes.io/projected/808069be-f7b4-4e1c-86d2-585915e49a1f-kube-api-access-xdjmn\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.482905 4520 generic.go:334] "Generic (PLEG): container finished" podID="4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" containerID="140c2644f6424f5407d1d2a9ae130c017e4959f0fa96f42b0a75b0679bc8bcea" exitCode=0
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.482972 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba","Type":"ContainerDied","Data":"140c2644f6424f5407d1d2a9ae130c017e4959f0fa96f42b0a75b0679bc8bcea"}
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.485157 4520 generic.go:334] "Generic (PLEG): container finished" podID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerID="965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0" exitCode=143
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.485210 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f62fce94-0031-431f-a8a9-213c4b0b4a2e","Type":"ContainerDied","Data":"965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0"}
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.487674 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"808069be-f7b4-4e1c-86d2-585915e49a1f","Type":"ContainerDied","Data":"46a8c4ccd615523a5ec960d4d0ba0817a0f7cb2101fb054eaf8a69f30d1f59be"}
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.487709 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.487781 4520 scope.go:117] "RemoveContainer" containerID="72c4b506da37fc371c227be33163d16699d0b85599ffc83a6f4e642a05b2fe48"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.572381 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.602506 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.615398 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648206 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 07:03:22 crc kubenswrapper[4520]: E0130 07:03:22.648626 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808069be-f7b4-4e1c-86d2-585915e49a1f" containerName="kube-state-metrics"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648652 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="808069be-f7b4-4e1c-86d2-585915e49a1f" containerName="kube-state-metrics"
Jan 30 07:03:22 crc kubenswrapper[4520]: E0130 07:03:22.648686 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" containerName="init"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648693 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" containerName="init"
Jan 30 07:03:22 crc kubenswrapper[4520]: E0130 07:03:22.648707 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" containerName="nova-scheduler-scheduler"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648712 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" containerName="nova-scheduler-scheduler"
Jan 30 07:03:22 crc kubenswrapper[4520]: E0130 07:03:22.648724 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" containerName="dnsmasq-dns"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648729 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" containerName="dnsmasq-dns"
Jan 30 07:03:22 crc kubenswrapper[4520]: E0130 07:03:22.648738 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d598af-4041-4d4e-8594-56d19d1225f5" containerName="nova-manage"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648744 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d598af-4041-4d4e-8594-56d19d1225f5" containerName="nova-manage"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648889 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" containerName="dnsmasq-dns"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648907 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" containerName="nova-scheduler-scheduler"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648915 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="808069be-f7b4-4e1c-86d2-585915e49a1f" containerName="kube-state-metrics"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.648923 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d598af-4041-4d4e-8594-56d19d1225f5" containerName="nova-manage"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.649586 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.652357 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.652481 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.718198 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3b80d9-dfeb-4120-a523-4f4ceea700c8" path="/var/lib/kubelet/pods/2c3b80d9-dfeb-4120-a523-4f4ceea700c8/volumes"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.719171 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808069be-f7b4-4e1c-86d2-585915e49a1f" path="/var/lib/kubelet/pods/808069be-f7b4-4e1c-86d2-585915e49a1f/volumes"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.730988 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.765980 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-config-data\") pod \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") "
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.766031 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-combined-ca-bundle\") pod \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") "
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.766340 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrs82\" (UniqueName: \"kubernetes.io/projected/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-kube-api-access-rrs82\") pod \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\" (UID: \"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba\") "
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.768122 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb268\" (UniqueName: \"kubernetes.io/projected/2790b738-6242-4208-a94f-be166868cc43-kube-api-access-gb268\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.768319 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.768356 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.768499 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.786946 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-kube-api-access-rrs82" (OuterVolumeSpecName: "kube-api-access-rrs82") pod "4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" (UID: "4b6d494a-0c95-4c49-ab5e-41ea8ce094ba"). InnerVolumeSpecName "kube-api-access-rrs82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.816311 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-config-data" (OuterVolumeSpecName: "config-data") pod "4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" (UID: "4b6d494a-0c95-4c49-ab5e-41ea8ce094ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.835624 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" (UID: "4b6d494a-0c95-4c49-ab5e-41ea8ce094ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.870291 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb268\" (UniqueName: \"kubernetes.io/projected/2790b738-6242-4208-a94f-be166868cc43-kube-api-access-gb268\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.870441 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.870477 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.870590 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.870684 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrs82\" (UniqueName: \"kubernetes.io/projected/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-kube-api-access-rrs82\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.870700 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.870710 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.874319 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.876125 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.888248 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2790b738-6242-4208-a94f-be166868cc43-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.889681 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb268\" (UniqueName: \"kubernetes.io/projected/2790b738-6242-4208-a94f-be166868cc43-kube-api-access-gb268\") pod \"kube-state-metrics-0\" (UID: \"2790b738-6242-4208-a94f-be166868cc43\") " pod="openstack/kube-state-metrics-0"
Jan 30 07:03:22 crc kubenswrapper[4520]: I0130 07:03:22.964897 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.504865 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b6d494a-0c95-4c49-ab5e-41ea8ce094ba","Type":"ContainerDied","Data":"6449ab48c9b459bc39a82bbb86baad190a42b10af9b08f68efd84b98e6e4ba64"}
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.505227 4520 scope.go:117] "RemoveContainer" containerID="140c2644f6424f5407d1d2a9ae130c017e4959f0fa96f42b0a75b0679bc8bcea"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.504917 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.510757 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.549923 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.564267 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.569277 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.570629 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.574031 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.574680 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.583544 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.583703 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-config-data\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.583788 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl6sp\" (UniqueName: \"kubernetes.io/projected/e5e5f699-f08a-4fe0-9d66-f110745cab69-kube-api-access-jl6sp\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.687358 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.689485 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-config-data\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.689620 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl6sp\" (UniqueName: \"kubernetes.io/projected/e5e5f699-f08a-4fe0-9d66-f110745cab69-kube-api-access-jl6sp\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.695246 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.700072 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-config-data\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0"
Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.707665 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl6sp\" (UniqueName:
\"kubernetes.io/projected/e5e5f699-f08a-4fe0-9d66-f110745cab69-kube-api-access-jl6sp\") pod \"nova-scheduler-0\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " pod="openstack/nova-scheduler-0" Jan 30 07:03:23 crc kubenswrapper[4520]: I0130 07:03:23.913059 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.425068 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.522529 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5e5f699-f08a-4fe0-9d66-f110745cab69","Type":"ContainerStarted","Data":"2afa7bd3b397887472dc363312eb007365b130a0a79d2588a81261fc509ab0a7"} Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.524692 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.525342 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2790b738-6242-4208-a94f-be166868cc43","Type":"ContainerStarted","Data":"166867931bacaf75a849cfd65d173b07f274f6d2b8c1e4aceef9e61a88131979"} Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.525405 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2790b738-6242-4208-a94f-be166868cc43","Type":"ContainerStarted","Data":"e1159dc0c7e621faed330ebbaad7ca51161cbc29f28bec957107a78a9aa81f67"} Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.540376 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.237608788 podStartE2EDuration="2.540359085s" podCreationTimestamp="2026-01-30 07:03:22 +0000 UTC" firstStartedPulling="2026-01-30 07:03:23.515526539 +0000 UTC m=+1117.143878720" lastFinishedPulling="2026-01-30 07:03:23.818276836 +0000 UTC m=+1117.446629017" observedRunningTime="2026-01-30 07:03:24.537472147 +0000 UTC m=+1118.165824328" watchObservedRunningTime="2026-01-30 07:03:24.540359085 +0000 UTC m=+1118.168711266" Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.699916 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6d494a-0c95-4c49-ab5e-41ea8ce094ba" path="/var/lib/kubelet/pods/4b6d494a-0c95-4c49-ab5e-41ea8ce094ba/volumes" Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.833048 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.833332 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="ceilometer-central-agent" containerID="cri-o://e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701" gracePeriod=30 Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.833395 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="proxy-httpd" containerID="cri-o://18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5" gracePeriod=30 Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.833423 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" 
containerName="sg-core" containerID="cri-o://9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4" gracePeriod=30 Jan 30 07:03:24 crc kubenswrapper[4520]: I0130 07:03:24.833462 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="ceilometer-notification-agent" containerID="cri-o://7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c" gracePeriod=30 Jan 30 07:03:25 crc kubenswrapper[4520]: I0130 07:03:25.534575 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5e5f699-f08a-4fe0-9d66-f110745cab69","Type":"ContainerStarted","Data":"bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7"} Jan 30 07:03:25 crc kubenswrapper[4520]: I0130 07:03:25.538577 4520 generic.go:334] "Generic (PLEG): container finished" podID="fd728108-debc-4baa-8a2d-b82733e5976a" containerID="18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5" exitCode=0 Jan 30 07:03:25 crc kubenswrapper[4520]: I0130 07:03:25.538604 4520 generic.go:334] "Generic (PLEG): container finished" podID="fd728108-debc-4baa-8a2d-b82733e5976a" containerID="9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4" exitCode=2 Jan 30 07:03:25 crc kubenswrapper[4520]: I0130 07:03:25.538612 4520 generic.go:334] "Generic (PLEG): container finished" podID="fd728108-debc-4baa-8a2d-b82733e5976a" containerID="e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701" exitCode=0 Jan 30 07:03:25 crc kubenswrapper[4520]: I0130 07:03:25.539061 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerDied","Data":"18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5"} Jan 30 07:03:25 crc kubenswrapper[4520]: I0130 07:03:25.539090 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerDied","Data":"9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4"} Jan 30 07:03:25 crc kubenswrapper[4520]: I0130 07:03:25.539101 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerDied","Data":"e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701"} Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.541218 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.562692 4520 generic.go:334] "Generic (PLEG): container finished" podID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerID="2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3" exitCode=0 Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.562769 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.562774 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f62fce94-0031-431f-a8a9-213c4b0b4a2e","Type":"ContainerDied","Data":"2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3"} Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.562898 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f62fce94-0031-431f-a8a9-213c4b0b4a2e","Type":"ContainerDied","Data":"1e81b27fd80f4f8ca9321b0317652bae7d1a15656ef0c3ea26482c2d9f73993d"} Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.562940 4520 scope.go:117] "RemoveContainer" containerID="2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.577161 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.577144244 podStartE2EDuration="4.577144244s" podCreationTimestamp="2026-01-30 07:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:25.55931891 +0000 UTC m=+1119.187671091" watchObservedRunningTime="2026-01-30 07:03:27.577144244 +0000 UTC m=+1121.205496425" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.585046 4520 scope.go:117] "RemoveContainer" containerID="965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.622091 4520 scope.go:117] "RemoveContainer" containerID="2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3" Jan 30 07:03:27 crc kubenswrapper[4520]: E0130 07:03:27.622802 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3\": container with ID starting with 2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3 not found: ID does not exist" containerID="2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.622840 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3"} err="failed to get container status \"2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3\": rpc error: code = NotFound desc = could not find container \"2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3\": container with ID starting with 2988d5f082ad6a16195cc9607124bb4bec4a6b0f60aec9edbf2a54077fe2a7b3 not found: ID does not exist" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.622867 4520 scope.go:117] "RemoveContainer" containerID="965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0" Jan 30 07:03:27 crc kubenswrapper[4520]: E0130 07:03:27.623100 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0\": container with ID starting with 965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0 not found: ID does not exist" containerID="965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.623125 4520 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0"} err="failed to get container status \"965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0\": rpc error: code = NotFound desc = could not find container \"965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0\": container with ID starting with 965b27730d3871b9ead1be60eac6426cee58e81be590869fca9ca5dff06875a0 not found: ID does not exist" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.677201 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-config-data\") pod \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.677310 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-combined-ca-bundle\") pod \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.677379 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsdwd\" (UniqueName: \"kubernetes.io/projected/f62fce94-0031-431f-a8a9-213c4b0b4a2e-kube-api-access-lsdwd\") pod \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.677616 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62fce94-0031-431f-a8a9-213c4b0b4a2e-logs\") pod \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\" (UID: \"f62fce94-0031-431f-a8a9-213c4b0b4a2e\") " Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.678761 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f62fce94-0031-431f-a8a9-213c4b0b4a2e-logs" (OuterVolumeSpecName: "logs") pod "f62fce94-0031-431f-a8a9-213c4b0b4a2e" (UID: "f62fce94-0031-431f-a8a9-213c4b0b4a2e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.685676 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f62fce94-0031-431f-a8a9-213c4b0b4a2e-kube-api-access-lsdwd" (OuterVolumeSpecName: "kube-api-access-lsdwd") pod "f62fce94-0031-431f-a8a9-213c4b0b4a2e" (UID: "f62fce94-0031-431f-a8a9-213c4b0b4a2e"). InnerVolumeSpecName "kube-api-access-lsdwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.704379 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-config-data" (OuterVolumeSpecName: "config-data") pod "f62fce94-0031-431f-a8a9-213c4b0b4a2e" (UID: "f62fce94-0031-431f-a8a9-213c4b0b4a2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.709750 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f62fce94-0031-431f-a8a9-213c4b0b4a2e" (UID: "f62fce94-0031-431f-a8a9-213c4b0b4a2e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.788916 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.788949 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62fce94-0031-431f-a8a9-213c4b0b4a2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.788973 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsdwd\" (UniqueName: \"kubernetes.io/projected/f62fce94-0031-431f-a8a9-213c4b0b4a2e-kube-api-access-lsdwd\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.788984 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62fce94-0031-431f-a8a9-213c4b0b4a2e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.910741 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.926387 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.939630 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:27 crc kubenswrapper[4520]: E0130 07:03:27.940335 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-log" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.940364 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-log" Jan 30 07:03:27 crc kubenswrapper[4520]: E0130 07:03:27.940409 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-api" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.940417 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-api" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.940722 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-log" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.940752 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" containerName="nova-api-api" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.942402 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.944264 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 07:03:27 crc kubenswrapper[4520]: I0130 07:03:27.946394 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.097622 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-config-data\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.098053 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.098144 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-logs\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.098169 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sg5p\" (UniqueName: \"kubernetes.io/projected/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-kube-api-access-9sg5p\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.199093 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-logs\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.199141 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sg5p\" (UniqueName: \"kubernetes.io/projected/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-kube-api-access-9sg5p\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.199190 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-config-data\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.199236 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.200626 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-logs\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " 
pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.204473 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-config-data\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.205040 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.218029 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sg5p\" (UniqueName: \"kubernetes.io/projected/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-kube-api-access-9sg5p\") pod \"nova-api-0\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.278021 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.706477 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f62fce94-0031-431f-a8a9-213c4b0b4a2e" path="/var/lib/kubelet/pods/f62fce94-0031-431f-a8a9-213c4b0b4a2e/volumes" Jan 30 07:03:28 crc kubenswrapper[4520]: I0130 07:03:28.914732 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.201628 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.315018 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.327294 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-run-httpd\") pod \"fd728108-debc-4baa-8a2d-b82733e5976a\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.327650 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd728108-debc-4baa-8a2d-b82733e5976a" (UID: "fd728108-debc-4baa-8a2d-b82733e5976a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.327719 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz7qf\" (UniqueName: \"kubernetes.io/projected/fd728108-debc-4baa-8a2d-b82733e5976a-kube-api-access-nz7qf\") pod \"fd728108-debc-4baa-8a2d-b82733e5976a\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.328323 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-scripts\") pod \"fd728108-debc-4baa-8a2d-b82733e5976a\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.328367 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-combined-ca-bundle\") pod \"fd728108-debc-4baa-8a2d-b82733e5976a\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.328420 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-sg-core-conf-yaml\") pod \"fd728108-debc-4baa-8a2d-b82733e5976a\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.328657 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-config-data\") pod \"fd728108-debc-4baa-8a2d-b82733e5976a\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.328722 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-log-httpd\") pod \"fd728108-debc-4baa-8a2d-b82733e5976a\" (UID: \"fd728108-debc-4baa-8a2d-b82733e5976a\") " Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.329381 4520 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.329782 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd728108-debc-4baa-8a2d-b82733e5976a" (UID: "fd728108-debc-4baa-8a2d-b82733e5976a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.335062 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-scripts" (OuterVolumeSpecName: "scripts") pod "fd728108-debc-4baa-8a2d-b82733e5976a" (UID: "fd728108-debc-4baa-8a2d-b82733e5976a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.335070 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd728108-debc-4baa-8a2d-b82733e5976a-kube-api-access-nz7qf" (OuterVolumeSpecName: "kube-api-access-nz7qf") pod "fd728108-debc-4baa-8a2d-b82733e5976a" (UID: "fd728108-debc-4baa-8a2d-b82733e5976a"). InnerVolumeSpecName "kube-api-access-nz7qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.382623 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd728108-debc-4baa-8a2d-b82733e5976a" (UID: "fd728108-debc-4baa-8a2d-b82733e5976a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.431155 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.431185 4520 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.431197 4520 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd728108-debc-4baa-8a2d-b82733e5976a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.431207 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz7qf\" (UniqueName: \"kubernetes.io/projected/fd728108-debc-4baa-8a2d-b82733e5976a-kube-api-access-nz7qf\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.469329 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd728108-debc-4baa-8a2d-b82733e5976a" (UID: "fd728108-debc-4baa-8a2d-b82733e5976a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.469888 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-config-data" (OuterVolumeSpecName: "config-data") pod "fd728108-debc-4baa-8a2d-b82733e5976a" (UID: "fd728108-debc-4baa-8a2d-b82733e5976a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.533236 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.533266 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd728108-debc-4baa-8a2d-b82733e5976a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.595138 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ac0901c-1c9d-41c2-bbf6-88ee904873b2","Type":"ContainerStarted","Data":"7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df"} Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.595200 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ac0901c-1c9d-41c2-bbf6-88ee904873b2","Type":"ContainerStarted","Data":"58f6bf2c7ecea2274f11de500ac582f3ba0fd490ec89df31c2e7b300d96594ab"} Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.602921 4520 generic.go:334] "Generic (PLEG): container finished" podID="fd728108-debc-4baa-8a2d-b82733e5976a" containerID="7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c" exitCode=0 Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.602971 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerDied","Data":"7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c"} Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.603004 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd728108-debc-4baa-8a2d-b82733e5976a","Type":"ContainerDied","Data":"d8cdc16e3c8db2888dbfd66f0c61f55bccfd98a17c8984172b2989703d2c2a38"} Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.603026 4520 scope.go:117] "RemoveContainer" containerID="18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.603234 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.668560 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.673771 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.682664 4520 scope.go:117] "RemoveContainer" containerID="9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.707568 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:29 crc kubenswrapper[4520]: E0130 07:03:29.708117 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="ceilometer-notification-agent" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.708173 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="ceilometer-notification-agent" Jan 30 07:03:29 crc kubenswrapper[4520]: E0130 07:03:29.708233 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="proxy-httpd" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.708272 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="proxy-httpd" Jan 30 07:03:29 crc kubenswrapper[4520]: E0130 07:03:29.708332 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="sg-core" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.708397 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="sg-core" Jan 30 07:03:29 crc kubenswrapper[4520]: E0130 07:03:29.708437 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="ceilometer-central-agent" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.708473 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="ceilometer-central-agent" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.708741 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="proxy-httpd" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.708800 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="ceilometer-central-agent" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.708848 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="ceilometer-notification-agent" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.708903 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" containerName="sg-core" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.710649 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.714877 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.715214 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.715397 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.721323 4520 scope.go:117] "RemoveContainer" containerID="7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.731387 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.776149 4520 scope.go:117] "RemoveContainer" containerID="e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.798372 4520 scope.go:117] "RemoveContainer" containerID="18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5" Jan 30 07:03:29 crc kubenswrapper[4520]: E0130 07:03:29.798752 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5\": container with ID starting with 18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5 not found: ID does not exist" containerID="18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.798784 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5"} err="failed to get container status \"18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5\": rpc error: code = NotFound desc = could not find container \"18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5\": container with ID starting with 18c9cfbcf1eaafc8c67163a0eaefa32a263a0e085206b695d46fd1e64342a6d5 not found: ID does not exist" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.798809 4520 scope.go:117] "RemoveContainer" containerID="9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4" Jan 30 07:03:29 crc kubenswrapper[4520]: E0130 07:03:29.799031 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4\": container with ID starting with 9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4 not found: ID does not exist" containerID="9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.799069 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4"} err="failed to get container status \"9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4\": rpc error: code = NotFound desc = could not find container \"9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4\": container with ID starting with 9ddc6beb0373bd8aeb239ad6737680c1d693e35932c291c632094400fe8ce7d4 not found: ID does not exist" Jan 30 07:03:29 
crc kubenswrapper[4520]: I0130 07:03:29.799085 4520 scope.go:117] "RemoveContainer" containerID="7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c" Jan 30 07:03:29 crc kubenswrapper[4520]: E0130 07:03:29.799256 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c\": container with ID starting with 7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c not found: ID does not exist" containerID="7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.799276 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c"} err="failed to get container status \"7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c\": rpc error: code = NotFound desc = could not find container \"7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c\": container with ID starting with 7b59a1adbf7b5535f0afda8ddf36ac7a9ab337c330af70bc47c33af8ad631f1c not found: ID does not exist" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.799288 4520 scope.go:117] "RemoveContainer" containerID="e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701" Jan 30 07:03:29 crc kubenswrapper[4520]: E0130 07:03:29.799869 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701\": container with ID starting with e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701 not found: ID does not exist" containerID="e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.799887 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701"} err="failed to get container status \"e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701\": rpc error: code = NotFound desc = could not find container \"e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701\": container with ID starting with e1f080a10a66b1b3dc3ea1e67e08a09bdb6e76dabd0c4a0328e7b666de664701 not found: ID does not exist" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.840680 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-run-httpd\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.840728 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-log-httpd\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.840780 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " 
pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.840806 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-config-data\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.840837 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.840864 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5nl\" (UniqueName: \"kubernetes.io/projected/e14b67b4-bf87-4dad-8452-34b620d4c6aa-kube-api-access-wj5nl\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.840891 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-scripts\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.840942 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.872802 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.943613 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-log-httpd\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.944790 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.944899 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-config-data\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.945020 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 
07:03:29.945159 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5nl\" (UniqueName: \"kubernetes.io/projected/e14b67b4-bf87-4dad-8452-34b620d4c6aa-kube-api-access-wj5nl\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.945260 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-scripts\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.945474 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.945659 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-run-httpd\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.946259 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-run-httpd\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.947559 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-log-httpd\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.953945 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.954199 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.954493 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-config-data\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.955436 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.957338 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-scripts\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:29 crc kubenswrapper[4520]: I0130 07:03:29.964260 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5nl\" (UniqueName: \"kubernetes.io/projected/e14b67b4-bf87-4dad-8452-34b620d4c6aa-kube-api-access-wj5nl\") pod \"ceilometer-0\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " pod="openstack/ceilometer-0" Jan 30 07:03:30 crc kubenswrapper[4520]: I0130 07:03:30.034306 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:03:30 crc kubenswrapper[4520]: I0130 07:03:30.550937 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:30 crc kubenswrapper[4520]: I0130 07:03:30.616995 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ac0901c-1c9d-41c2-bbf6-88ee904873b2","Type":"ContainerStarted","Data":"c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1"} Jan 30 07:03:30 crc kubenswrapper[4520]: I0130 07:03:30.623379 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerStarted","Data":"7150c3d41a6bf4ed421f2b6ea86451b3fcdcfb5cf853912756b9363740def71f"} Jan 30 07:03:30 crc kubenswrapper[4520]: I0130 07:03:30.646437 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.646411661 podStartE2EDuration="3.646411661s" podCreationTimestamp="2026-01-30 07:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:30.634577849 +0000 UTC m=+1124.262930020" watchObservedRunningTime="2026-01-30 07:03:30.646411661 +0000 UTC m=+1124.274763841" Jan 30 07:03:30 crc kubenswrapper[4520]: I0130 07:03:30.696170 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd728108-debc-4baa-8a2d-b82733e5976a" path="/var/lib/kubelet/pods/fd728108-debc-4baa-8a2d-b82733e5976a/volumes" Jan 30 07:03:31 crc kubenswrapper[4520]: I0130 07:03:31.635927 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerStarted","Data":"7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37"} Jan 30 07:03:32 crc kubenswrapper[4520]: I0130 07:03:32.646867 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerStarted","Data":"811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee"} Jan 30 07:03:33 crc kubenswrapper[4520]: I0130 07:03:33.026707 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 07:03:33 crc kubenswrapper[4520]: I0130 07:03:33.664710 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerStarted","Data":"f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48"} Jan 30 07:03:33 crc kubenswrapper[4520]: I0130 07:03:33.915156 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Jan 30 07:03:33 crc kubenswrapper[4520]: I0130 07:03:33.943309 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 07:03:34 crc kubenswrapper[4520]: I0130 07:03:34.710094 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 07:03:35 crc kubenswrapper[4520]: I0130 07:03:35.690911 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerStarted","Data":"23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd"} Jan 30 07:03:35 crc kubenswrapper[4520]: I0130 07:03:35.714396 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.367431822 podStartE2EDuration="6.714374771s" podCreationTimestamp="2026-01-30 07:03:29 +0000 UTC" firstStartedPulling="2026-01-30 07:03:30.563440801 +0000 UTC m=+1124.191792982" lastFinishedPulling="2026-01-30 07:03:34.91038375 +0000 UTC m=+1128.538735931" observedRunningTime="2026-01-30 07:03:35.71038585 +0000 UTC m=+1129.338738031" watchObservedRunningTime="2026-01-30 07:03:35.714374771 +0000 UTC m=+1129.342726942" Jan 30 07:03:36 crc kubenswrapper[4520]: I0130 07:03:36.698921 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 07:03:38 crc kubenswrapper[4520]: I0130 07:03:38.279656 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 07:03:38 crc kubenswrapper[4520]: I0130 07:03:38.279710 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 07:03:39 crc kubenswrapper[4520]: I0130 07:03:39.362743 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:03:39 crc kubenswrapper[4520]: I0130 07:03:39.362990 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.768998 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.774119 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.783808 4520 generic.go:334] "Generic (PLEG): container finished" podID="900d6126-3c05-4fa2-9f32-f444ff2ed311" containerID="5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7" exitCode=137 Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.783854 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"900d6126-3c05-4fa2-9f32-f444ff2ed311","Type":"ContainerDied","Data":"5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7"} Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.783881 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"900d6126-3c05-4fa2-9f32-f444ff2ed311","Type":"ContainerDied","Data":"a82f565879785414df410d18ce88519704b43dbeb581b1136b09eaae332ff8c2"} Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.783899 4520 scope.go:117] "RemoveContainer" containerID="5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.783987 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.791582 4520 generic.go:334] "Generic (PLEG): container finished" podID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerID="b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106" exitCode=137 Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.791786 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83","Type":"ContainerDied","Data":"b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106"} Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.791805 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83","Type":"ContainerDied","Data":"40283b214d4f9b0be326d32c3fe35b000b401a068c6046fb71ab275ca567d357"} Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.791864 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.812748 4520 scope.go:117] "RemoveContainer" containerID="5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7" Jan 30 07:03:45 crc kubenswrapper[4520]: E0130 07:03:45.813102 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7\": container with ID starting with 5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7 not found: ID does not exist" containerID="5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.813134 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7"} err="failed to get container status \"5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7\": rpc error: code = NotFound desc = could not find container \"5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7\": container with ID starting with 5ffb6c3032ea5997072d1939c2ed81b58df86111e3207796b371b2974f1474b7 not found: ID does not exist" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.813154 4520 scope.go:117] "RemoveContainer" containerID="b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.837662 4520 scope.go:117] "RemoveContainer" containerID="99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.866937 4520 scope.go:117] "RemoveContainer" containerID="b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106" Jan 30 07:03:45 crc kubenswrapper[4520]: E0130 07:03:45.868974 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106\": container with ID starting with b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106 not found: ID does not exist" containerID="b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.869025 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106"} err="failed to get container status \"b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106\": rpc error: code = NotFound desc = could not find container \"b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106\": container with ID starting with b9681c244f7bbb657e949b2d3a6ba253ddaaf49ed8b36d0b25882d828a757106 not found: ID does not exist" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.869093 4520 scope.go:117] "RemoveContainer" containerID="99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a" Jan 30 07:03:45 crc kubenswrapper[4520]: E0130 07:03:45.870397 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a\": container with ID starting with 99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a not found: ID does not exist" containerID="99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a" Jan 30 07:03:45 crc 
kubenswrapper[4520]: I0130 07:03:45.870423 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a"} err="failed to get container status \"99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a\": rpc error: code = NotFound desc = could not find container \"99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a\": container with ID starting with 99693c6e6476da2807ff4c320b90954d84fd250604b37e8e4e7d7800bef19f5a not found: ID does not exist" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.959810 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btk7r\" (UniqueName: \"kubernetes.io/projected/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-kube-api-access-btk7r\") pod \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.959924 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-config-data\") pod \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.959958 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-combined-ca-bundle\") pod \"900d6126-3c05-4fa2-9f32-f444ff2ed311\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.960015 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-logs\") pod \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.960066 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-combined-ca-bundle\") pod \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\" (UID: \"7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83\") " Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.960143 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-config-data\") pod \"900d6126-3c05-4fa2-9f32-f444ff2ed311\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.960205 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t28ll\" (UniqueName: \"kubernetes.io/projected/900d6126-3c05-4fa2-9f32-f444ff2ed311-kube-api-access-t28ll\") pod \"900d6126-3c05-4fa2-9f32-f444ff2ed311\" (UID: \"900d6126-3c05-4fa2-9f32-f444ff2ed311\") " Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.960747 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-logs" (OuterVolumeSpecName: "logs") pod "7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" (UID: "7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.977816 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900d6126-3c05-4fa2-9f32-f444ff2ed311-kube-api-access-t28ll" (OuterVolumeSpecName: "kube-api-access-t28ll") pod "900d6126-3c05-4fa2-9f32-f444ff2ed311" (UID: "900d6126-3c05-4fa2-9f32-f444ff2ed311"). InnerVolumeSpecName "kube-api-access-t28ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.977884 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-kube-api-access-btk7r" (OuterVolumeSpecName: "kube-api-access-btk7r") pod "7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" (UID: "7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83"). InnerVolumeSpecName "kube-api-access-btk7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.984245 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-config-data" (OuterVolumeSpecName: "config-data") pod "900d6126-3c05-4fa2-9f32-f444ff2ed311" (UID: "900d6126-3c05-4fa2-9f32-f444ff2ed311"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.987678 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" (UID: "7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.990113 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "900d6126-3c05-4fa2-9f32-f444ff2ed311" (UID: "900d6126-3c05-4fa2-9f32-f444ff2ed311"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:45 crc kubenswrapper[4520]: I0130 07:03:45.993874 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-config-data" (OuterVolumeSpecName: "config-data") pod "7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" (UID: "7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.063884 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btk7r\" (UniqueName: \"kubernetes.io/projected/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-kube-api-access-btk7r\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.063996 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.064053 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.064102 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.064158 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.064211 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900d6126-3c05-4fa2-9f32-f444ff2ed311-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.064258 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t28ll\" (UniqueName: \"kubernetes.io/projected/900d6126-3c05-4fa2-9f32-f444ff2ed311-kube-api-access-t28ll\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.120185 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.129291 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.137122 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.146192 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.153087 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 07:03:46 crc kubenswrapper[4520]: E0130 07:03:46.153507 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900d6126-3c05-4fa2-9f32-f444ff2ed311" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.153543 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="900d6126-3c05-4fa2-9f32-f444ff2ed311" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 07:03:46 crc kubenswrapper[4520]: E0130 07:03:46.153576 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerName="nova-metadata-log" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.153587 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerName="nova-metadata-log" Jan 30 07:03:46 
crc kubenswrapper[4520]: E0130 07:03:46.153600 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerName="nova-metadata-metadata" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.153605 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerName="nova-metadata-metadata" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.153771 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="900d6126-3c05-4fa2-9f32-f444ff2ed311" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.153795 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerName="nova-metadata-metadata" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.153808 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" containerName="nova-metadata-log" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.154451 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.158827 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.163581 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.163686 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.166113 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.166251 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.166435 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.166530 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94km\" (UniqueName: \"kubernetes.io/projected/7ac4ff34-ecba-47be-be6a-920d67d398dc-kube-api-access-q94km\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.166700 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.173153 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.175026 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.176612 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.176745 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.187451 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.208105 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.268932 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.268987 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q94km\" (UniqueName: \"kubernetes.io/projected/7ac4ff34-ecba-47be-be6a-920d67d398dc-kube-api-access-q94km\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.269034 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.269069 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.269118 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xxc\" (UniqueName: \"kubernetes.io/projected/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-kube-api-access-c4xxc\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.269174 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-logs\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc 
kubenswrapper[4520]: I0130 07:03:46.269202 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.269254 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.269304 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.269349 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-config-data\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.272905 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.273857 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.274312 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.275154 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac4ff34-ecba-47be-be6a-920d67d398dc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.284265 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94km\" (UniqueName: \"kubernetes.io/projected/7ac4ff34-ecba-47be-be6a-920d67d398dc-kube-api-access-q94km\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ac4ff34-ecba-47be-be6a-920d67d398dc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.371614 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.371726 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4xxc\" (UniqueName: \"kubernetes.io/projected/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-kube-api-access-c4xxc\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.372542 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-logs\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.372931 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-logs\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.373140 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.373172 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-config-data\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.375171 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.376561 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-config-data\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.377495 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.386022 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4xxc\" (UniqueName: \"kubernetes.io/projected/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-kube-api-access-c4xxc\") pod \"nova-metadata-0\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.469425 4520 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.487262 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.702734 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83" path="/var/lib/kubelet/pods/7e5fcc25-dbdc-40b5-8c22-e4639fc1ac83/volumes" Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.704111 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900d6126-3c05-4fa2-9f32-f444ff2ed311" path="/var/lib/kubelet/pods/900d6126-3c05-4fa2-9f32-f444ff2ed311/volumes" Jan 30 07:03:46 crc kubenswrapper[4520]: W0130 07:03:46.950716 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac4ff34_ecba_47be_be6a_920d67d398dc.slice/crio-415f18d7c62afb76a2f8a0a462fd32c51742a2fe5f87df5fb9128d069eec7048 WatchSource:0}: Error finding container 415f18d7c62afb76a2f8a0a462fd32c51742a2fe5f87df5fb9128d069eec7048: Status 404 returned error can't find the container with id 415f18d7c62afb76a2f8a0a462fd32c51742a2fe5f87df5fb9128d069eec7048 Jan 30 07:03:46 crc kubenswrapper[4520]: I0130 07:03:46.957169 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 07:03:47 crc kubenswrapper[4520]: I0130 07:03:47.012345 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:03:47 crc kubenswrapper[4520]: W0130 07:03:47.020313 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31568e9f_fbbe_4d8e_859f_1eed8d87ce26.slice/crio-13e3f7d16fdeb0366ac943811557a2b1f689e0b56cb015dedc31d2689143022e WatchSource:0}: Error finding container 13e3f7d16fdeb0366ac943811557a2b1f689e0b56cb015dedc31d2689143022e: Status 404 returned error can't find the container with id 13e3f7d16fdeb0366ac943811557a2b1f689e0b56cb015dedc31d2689143022e Jan 30 07:03:47 crc kubenswrapper[4520]: I0130 07:03:47.817298 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31568e9f-fbbe-4d8e-859f-1eed8d87ce26","Type":"ContainerStarted","Data":"109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8"} Jan 30 07:03:47 crc kubenswrapper[4520]: I0130 07:03:47.818528 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31568e9f-fbbe-4d8e-859f-1eed8d87ce26","Type":"ContainerStarted","Data":"6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d"} Jan 30 07:03:47 crc kubenswrapper[4520]: I0130 07:03:47.818546 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31568e9f-fbbe-4d8e-859f-1eed8d87ce26","Type":"ContainerStarted","Data":"13e3f7d16fdeb0366ac943811557a2b1f689e0b56cb015dedc31d2689143022e"} Jan 30 07:03:47 crc kubenswrapper[4520]: I0130 07:03:47.822594 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ac4ff34-ecba-47be-be6a-920d67d398dc","Type":"ContainerStarted","Data":"5fc9c3fbdea7a29102ef90989c2d36ea7c29915d30c52f59445a6cd52ab21a87"} Jan 30 07:03:47 crc kubenswrapper[4520]: I0130 07:03:47.822623 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"7ac4ff34-ecba-47be-be6a-920d67d398dc","Type":"ContainerStarted","Data":"415f18d7c62afb76a2f8a0a462fd32c51742a2fe5f87df5fb9128d069eec7048"} Jan 30 07:03:47 crc kubenswrapper[4520]: I0130 07:03:47.841844 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.841823571 podStartE2EDuration="1.841823571s" podCreationTimestamp="2026-01-30 07:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:47.839793815 +0000 UTC m=+1141.468145996" watchObservedRunningTime="2026-01-30 07:03:47.841823571 +0000 UTC m=+1141.470175742" Jan 30 07:03:47 crc kubenswrapper[4520]: I0130 07:03:47.880928 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.880900867 podStartE2EDuration="1.880900867s" podCreationTimestamp="2026-01-30 07:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:47.860760529 +0000 UTC m=+1141.489112709" watchObservedRunningTime="2026-01-30 07:03:47.880900867 +0000 UTC m=+1141.509253049" Jan 30 07:03:48 crc kubenswrapper[4520]: I0130 07:03:48.284935 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 07:03:48 crc kubenswrapper[4520]: I0130 07:03:48.285914 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 07:03:48 crc kubenswrapper[4520]: I0130 07:03:48.289695 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 07:03:48 crc kubenswrapper[4520]: I0130 07:03:48.293013 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 07:03:48 crc kubenswrapper[4520]: I0130 07:03:48.842302 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 07:03:48 crc kubenswrapper[4520]: I0130 07:03:48.845931 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.043021 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f464775-8fv4z"] Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.044508 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.073267 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f464775-8fv4z"] Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.146017 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.146107 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-config\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.146219 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.146260 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bc9d\" (UniqueName: \"kubernetes.io/projected/c24cc31d-16b3-4859-a413-dbb766b276e2-kube-api-access-7bc9d\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.146291 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-svc\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.146316 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.248151 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.248225 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bc9d\" (UniqueName: \"kubernetes.io/projected/c24cc31d-16b3-4859-a413-dbb766b276e2-kube-api-access-7bc9d\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.248250 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-svc\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.248275 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.248436 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.248555 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-config\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.249159 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.249343 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-svc\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.249374 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.249591 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-config\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.249589 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.278752 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bc9d\" (UniqueName: \"kubernetes.io/projected/c24cc31d-16b3-4859-a413-dbb766b276e2-kube-api-access-7bc9d\") 
pod \"dnsmasq-dns-5f464775-8fv4z\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.379117 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.721408 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f464775-8fv4z"] Jan 30 07:03:49 crc kubenswrapper[4520]: I0130 07:03:49.868602 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f464775-8fv4z" event={"ID":"c24cc31d-16b3-4859-a413-dbb766b276e2","Type":"ContainerStarted","Data":"7d30d23535ae5adb298979836d1919bdb9c5e6a73742ce111cbb4fdfdb4a9079"} Jan 30 07:03:50 crc kubenswrapper[4520]: I0130 07:03:50.880585 4520 generic.go:334] "Generic (PLEG): container finished" podID="c24cc31d-16b3-4859-a413-dbb766b276e2" containerID="c2ee6f3d71320e5fa50ba665a846f444bc686b479b70f89c1f00b6ba24547955" exitCode=0 Jan 30 07:03:50 crc kubenswrapper[4520]: I0130 07:03:50.884106 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f464775-8fv4z" event={"ID":"c24cc31d-16b3-4859-a413-dbb766b276e2","Type":"ContainerDied","Data":"c2ee6f3d71320e5fa50ba665a846f444bc686b479b70f89c1f00b6ba24547955"} Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.470323 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.488594 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.488665 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.582712 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.726154 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.729130 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="ceilometer-central-agent" containerID="cri-o://7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37" gracePeriod=30 Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.729242 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="sg-core" containerID="cri-o://f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48" gracePeriod=30 Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.729344 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="ceilometer-notification-agent" containerID="cri-o://811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee" gracePeriod=30 Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.729448 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="proxy-httpd" containerID="cri-o://23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd" gracePeriod=30 Jan 30 
07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.749358 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.214:3000/\": EOF" Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.901878 4520 generic.go:334] "Generic (PLEG): container finished" podID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerID="f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48" exitCode=2 Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.901943 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerDied","Data":"f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48"} Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.905700 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f464775-8fv4z" event={"ID":"c24cc31d-16b3-4859-a413-dbb766b276e2","Type":"ContainerStarted","Data":"2b197a0fe1610f4ce5f7ecfb357fa5389a403593122ba7d89f9ccb5a7776245b"} Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.905872 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.905861 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-log" containerID="cri-o://7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df" gracePeriod=30 Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.906079 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-api" containerID="cri-o://c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1" gracePeriod=30 Jan 30 07:03:51 crc kubenswrapper[4520]: I0130 07:03:51.963705 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f464775-8fv4z" podStartSLOduration=3.963686325 podStartE2EDuration="3.963686325s" podCreationTimestamp="2026-01-30 07:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:51.933538028 +0000 UTC m=+1145.561890208" watchObservedRunningTime="2026-01-30 07:03:51.963686325 +0000 UTC m=+1145.592038507" Jan 30 07:03:52 crc kubenswrapper[4520]: I0130 07:03:52.921015 4520 generic.go:334] "Generic (PLEG): container finished" podID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerID="23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd" exitCode=0 Jan 30 07:03:52 crc kubenswrapper[4520]: I0130 07:03:52.921417 4520 generic.go:334] "Generic (PLEG): container finished" podID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerID="7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37" exitCode=0 Jan 30 07:03:52 crc kubenswrapper[4520]: I0130 07:03:52.921143 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerDied","Data":"23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd"} Jan 30 07:03:52 crc kubenswrapper[4520]: I0130 07:03:52.921557 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerDied","Data":"7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37"} Jan 30 07:03:52 crc kubenswrapper[4520]: I0130 07:03:52.924107 4520 generic.go:334] "Generic (PLEG): container finished" podID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerID="7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df" exitCode=143 Jan 30 07:03:52 crc kubenswrapper[4520]: I0130 07:03:52.924195 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ac0901c-1c9d-41c2-bbf6-88ee904873b2","Type":"ContainerDied","Data":"7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df"} Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.548942 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.600455 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-config-data\") pod \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.600562 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-logs\") pod \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.600706 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-combined-ca-bundle\") pod \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.600743 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sg5p\" (UniqueName: \"kubernetes.io/projected/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-kube-api-access-9sg5p\") pod \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\" (UID: \"6ac0901c-1c9d-41c2-bbf6-88ee904873b2\") " Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.601081 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-logs" (OuterVolumeSpecName: "logs") pod "6ac0901c-1c9d-41c2-bbf6-88ee904873b2" (UID: "6ac0901c-1c9d-41c2-bbf6-88ee904873b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.601286 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.647568 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-kube-api-access-9sg5p" (OuterVolumeSpecName: "kube-api-access-9sg5p") pod "6ac0901c-1c9d-41c2-bbf6-88ee904873b2" (UID: "6ac0901c-1c9d-41c2-bbf6-88ee904873b2"). InnerVolumeSpecName "kube-api-access-9sg5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.655497 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ac0901c-1c9d-41c2-bbf6-88ee904873b2" (UID: "6ac0901c-1c9d-41c2-bbf6-88ee904873b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.703551 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.703637 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sg5p\" (UniqueName: \"kubernetes.io/projected/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-kube-api-access-9sg5p\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.714651 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-config-data" (OuterVolumeSpecName: "config-data") pod "6ac0901c-1c9d-41c2-bbf6-88ee904873b2" (UID: "6ac0901c-1c9d-41c2-bbf6-88ee904873b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.806170 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac0901c-1c9d-41c2-bbf6-88ee904873b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.954565 4520 generic.go:334] "Generic (PLEG): container finished" podID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerID="c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1" exitCode=0 Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.954893 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ac0901c-1c9d-41c2-bbf6-88ee904873b2","Type":"ContainerDied","Data":"c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1"} Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.954926 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ac0901c-1c9d-41c2-bbf6-88ee904873b2","Type":"ContainerDied","Data":"58f6bf2c7ecea2274f11de500ac582f3ba0fd490ec89df31c2e7b300d96594ab"} Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.954945 4520 scope.go:117] "RemoveContainer" containerID="c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.954946 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.986065 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.987694 4520 scope.go:117] "RemoveContainer" containerID="7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df" Jan 30 07:03:55 crc kubenswrapper[4520]: I0130 07:03:55.993846 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.009417 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:56 crc kubenswrapper[4520]: E0130 07:03:56.009796 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-log" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.009815 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-log" Jan 30 07:03:56 crc kubenswrapper[4520]: E0130 07:03:56.009855 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-api" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.009861 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-api" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.010034 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-api" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.010049 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" containerName="nova-api-log" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.010924 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.016098 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.016659 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.016702 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.016915 4520 scope.go:117] "RemoveContainer" containerID="c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1" Jan 30 07:03:56 crc kubenswrapper[4520]: E0130 07:03:56.017707 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1\": container with ID starting with c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1 not found: ID does not exist" containerID="c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.017738 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1"} err="failed to get container status \"c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1\": rpc error: code = NotFound desc = could not find container \"c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1\": container with ID starting with c6aa667ba36db1ec85d0fac17d689aad4bcbe6c88d935e4fb0ab74c94d5e57b1 not found: ID does not exist" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.017762 4520 scope.go:117] "RemoveContainer" containerID="7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df" Jan 30 07:03:56 crc kubenswrapper[4520]: E0130 07:03:56.018878 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df\": container with ID starting with 7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df not found: ID does not exist" containerID="7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.018915 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df"} err="failed to get container status \"7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df\": rpc error: code = NotFound desc = could not find container \"7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df\": container with ID starting with 7c237286d700562c54b3059885ec85659b3877f6fb0ca9f1aa5e8b499087f4df not found: ID does not exist" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.032306 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.120265 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 
07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.120349 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.120383 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgzj4\" (UniqueName: \"kubernetes.io/projected/a6f89801-2e02-4941-b7fd-6c91b53d8823-kube-api-access-xgzj4\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.120755 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f89801-2e02-4941-b7fd-6c91b53d8823-logs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.120967 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-config-data\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.121210 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.223681 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f89801-2e02-4941-b7fd-6c91b53d8823-logs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.223740 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-config-data\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.223794 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.223895 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.223926 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.223951 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgzj4\" (UniqueName: \"kubernetes.io/projected/a6f89801-2e02-4941-b7fd-6c91b53d8823-kube-api-access-xgzj4\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.224749 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f89801-2e02-4941-b7fd-6c91b53d8823-logs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.229631 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.230332 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-config-data\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.233425 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.237421 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.239901 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgzj4\" (UniqueName: \"kubernetes.io/projected/a6f89801-2e02-4941-b7fd-6c91b53d8823-kube-api-access-xgzj4\") pod \"nova-api-0\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.326399 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.470683 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.488744 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.488785 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.510933 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.696138 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac0901c-1c9d-41c2-bbf6-88ee904873b2" path="/var/lib/kubelet/pods/6ac0901c-1c9d-41c2-bbf6-88ee904873b2/volumes" Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.778719 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.967305 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6f89801-2e02-4941-b7fd-6c91b53d8823","Type":"ContainerStarted","Data":"2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2"} Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.967349 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6f89801-2e02-4941-b7fd-6c91b53d8823","Type":"ContainerStarted","Data":"3176ecd493ec7d0d888d21365518303c7333dab72b572ce0b2f59ac9340d5d64"} Jan 30 07:03:56 crc kubenswrapper[4520]: I0130 07:03:56.988667 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.211052 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-m2hw8"] Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.212359 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.214145 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.215581 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.229814 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-m2hw8"] Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.259581 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-scripts\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.259655 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.259688 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-config-data\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.259715 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qhgs\" (UniqueName: \"kubernetes.io/projected/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-kube-api-access-4qhgs\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.363180 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-scripts\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.363258 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.363291 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-config-data\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.363321 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qhgs\" (UniqueName: 
\"kubernetes.io/projected/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-kube-api-access-4qhgs\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.390452 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qhgs\" (UniqueName: \"kubernetes.io/projected/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-kube-api-access-4qhgs\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.394623 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-scripts\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.395115 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.398464 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-config-data\") pod \"nova-cell1-cell-mapping-m2hw8\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.503691 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.503703 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.517148 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.578057 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.583344 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-config-data\") pod \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.583449 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-ceilometer-tls-certs\") pod \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.583775 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-run-httpd\") pod \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.583932 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-log-httpd\") pod \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.584009 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj5nl\" (UniqueName: \"kubernetes.io/projected/e14b67b4-bf87-4dad-8452-34b620d4c6aa-kube-api-access-wj5nl\") pod \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.584095 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-sg-core-conf-yaml\") pod \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.584244 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-scripts\") pod \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.584271 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-combined-ca-bundle\") pod \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\" (UID: \"e14b67b4-bf87-4dad-8452-34b620d4c6aa\") " Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.591876 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e14b67b4-bf87-4dad-8452-34b620d4c6aa" (UID: "e14b67b4-bf87-4dad-8452-34b620d4c6aa"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.610870 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e14b67b4-bf87-4dad-8452-34b620d4c6aa" (UID: "e14b67b4-bf87-4dad-8452-34b620d4c6aa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.617659 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-scripts" (OuterVolumeSpecName: "scripts") pod "e14b67b4-bf87-4dad-8452-34b620d4c6aa" (UID: "e14b67b4-bf87-4dad-8452-34b620d4c6aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.668231 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14b67b4-bf87-4dad-8452-34b620d4c6aa-kube-api-access-wj5nl" (OuterVolumeSpecName: "kube-api-access-wj5nl") pod "e14b67b4-bf87-4dad-8452-34b620d4c6aa" (UID: "e14b67b4-bf87-4dad-8452-34b620d4c6aa"). InnerVolumeSpecName "kube-api-access-wj5nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.694228 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.694423 4520 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.694433 4520 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e14b67b4-bf87-4dad-8452-34b620d4c6aa-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.694442 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj5nl\" (UniqueName: \"kubernetes.io/projected/e14b67b4-bf87-4dad-8452-34b620d4c6aa-kube-api-access-wj5nl\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.696416 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e14b67b4-bf87-4dad-8452-34b620d4c6aa" (UID: "e14b67b4-bf87-4dad-8452-34b620d4c6aa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.729477 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e14b67b4-bf87-4dad-8452-34b620d4c6aa" (UID: "e14b67b4-bf87-4dad-8452-34b620d4c6aa"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.769658 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e14b67b4-bf87-4dad-8452-34b620d4c6aa" (UID: "e14b67b4-bf87-4dad-8452-34b620d4c6aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.785799 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-config-data" (OuterVolumeSpecName: "config-data") pod "e14b67b4-bf87-4dad-8452-34b620d4c6aa" (UID: "e14b67b4-bf87-4dad-8452-34b620d4c6aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.797240 4520 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.797270 4520 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.797279 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.797290 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14b67b4-bf87-4dad-8452-34b620d4c6aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.987101 4520 generic.go:334] "Generic (PLEG): container finished" podID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerID="811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee" exitCode=0 Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.987250 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerDied","Data":"811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee"} Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.987269 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.987296 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e14b67b4-bf87-4dad-8452-34b620d4c6aa","Type":"ContainerDied","Data":"7150c3d41a6bf4ed421f2b6ea86451b3fcdcfb5cf853912756b9363740def71f"} Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.987318 4520 scope.go:117] "RemoveContainer" containerID="23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd" Jan 30 07:03:57 crc kubenswrapper[4520]: I0130 07:03:57.998795 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6f89801-2e02-4941-b7fd-6c91b53d8823","Type":"ContainerStarted","Data":"fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c"} Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.021945 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.021929755 podStartE2EDuration="3.021929755s" podCreationTimestamp="2026-01-30 07:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:58.018092039 +0000 UTC m=+1151.646444219" watchObservedRunningTime="2026-01-30 07:03:58.021929755 +0000 UTC m=+1151.650281936" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.023926 4520 scope.go:117] "RemoveContainer" containerID="f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.047174 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.049741 4520 scope.go:117] "RemoveContainer" containerID="811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.064229 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.076681 4520 scope.go:117] "RemoveContainer" containerID="7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.078503 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:58 crc kubenswrapper[4520]: E0130 07:03:58.078992 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="ceilometer-notification-agent" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.079022 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="ceilometer-notification-agent" Jan 30 07:03:58 crc kubenswrapper[4520]: E0130 07:03:58.079039 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="proxy-httpd" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.079046 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="proxy-httpd" Jan 30 07:03:58 crc kubenswrapper[4520]: E0130 07:03:58.079053 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="sg-core" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.079060 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="sg-core" Jan 
30 07:03:58 crc kubenswrapper[4520]: E0130 07:03:58.079097 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="ceilometer-central-agent" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.079104 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="ceilometer-central-agent" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.079272 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="ceilometer-notification-agent" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.079289 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="ceilometer-central-agent" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.079300 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="sg-core" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.079308 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" containerName="proxy-httpd" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.081111 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.084774 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.086354 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.086785 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.087715 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.124611 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-m2hw8"] Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.129469 4520 scope.go:117] "RemoveContainer" containerID="23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd" Jan 30 07:03:58 crc kubenswrapper[4520]: E0130 07:03:58.130409 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd\": container with ID starting with 23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd not found: ID does not exist" containerID="23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.130441 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd"} err="failed to get container status \"23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd\": rpc error: code = NotFound desc = could not find container \"23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd\": container with ID starting with 23074eb10e41ca3a61d705ae1d2be076a68cc2bde5f5eb4638fc8558cda5d7dd not found: ID does not exist" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.130466 4520 scope.go:117] "RemoveContainer" 
containerID="f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48" Jan 30 07:03:58 crc kubenswrapper[4520]: E0130 07:03:58.130815 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48\": container with ID starting with f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48 not found: ID does not exist" containerID="f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.130855 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48"} err="failed to get container status \"f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48\": rpc error: code = NotFound desc = could not find container \"f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48\": container with ID starting with f33b81769c8111f0be20b3aee93ec470184d83d2ed6ad1a074eab7c8efcb3d48 not found: ID does not exist" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.130870 4520 scope.go:117] "RemoveContainer" containerID="811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee" Jan 30 07:03:58 crc kubenswrapper[4520]: E0130 07:03:58.131265 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee\": container with ID starting with 811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee not found: ID does not exist" containerID="811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.131287 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee"} err="failed to get container status \"811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee\": rpc error: code = NotFound desc = could not find container \"811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee\": container with ID starting with 811b4e64bff75eaf79189ad6784ae807ff62003de41f97b999c1a64fc293c7ee not found: ID does not exist" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.131301 4520 scope.go:117] "RemoveContainer" containerID="7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37" Jan 30 07:03:58 crc kubenswrapper[4520]: E0130 07:03:58.131501 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37\": container with ID starting with 7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37 not found: ID does not exist" containerID="7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.131530 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37"} err="failed to get container status \"7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37\": rpc error: code = NotFound desc = could not find container \"7af896e4245d7175c804636fcd49752a01fe554524169b224cd07af1b126ce37\": container with ID starting with 
Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.208599 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.208805 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-scripts\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.208910 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-run-httpd\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.209003 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.209105 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-log-httpd\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.209165 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-config-data\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.209233 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.209293 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5499g\" (UniqueName: \"kubernetes.io/projected/9df01147-3505-4e88-b91c-671e2149ab19-kube-api-access-5499g\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.311829 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-run-httpd\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.311893 4520 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.311935 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-log-httpd\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.311956 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-config-data\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.311995 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.312017 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5499g\" (UniqueName: \"kubernetes.io/projected/9df01147-3505-4e88-b91c-671e2149ab19-kube-api-access-5499g\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.312089 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.312208 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-scripts\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.312422 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-run-httpd\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.312501 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-log-httpd\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.316707 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.317803 4520 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.318510 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.319297 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-scripts\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.320435 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-config-data\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.333577 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5499g\" (UniqueName: \"kubernetes.io/projected/9df01147-3505-4e88-b91c-671e2149ab19-kube-api-access-5499g\") pod \"ceilometer-0\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.420963 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.699991 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14b67b4-bf87-4dad-8452-34b620d4c6aa" path="/var/lib/kubelet/pods/e14b67b4-bf87-4dad-8452-34b620d4c6aa/volumes" Jan 30 07:03:58 crc kubenswrapper[4520]: I0130 07:03:58.860588 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.023850 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerStarted","Data":"8612fc75d144620d6dbc7f98e29e737628baffb079c8242f1a41171c0fb0285b"} Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.025498 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-m2hw8" event={"ID":"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed","Type":"ContainerStarted","Data":"ffb33993ee1ab0f2915731e0eb0b9235930223f9e5913bfaa196dcc2c36fd749"} Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.025579 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-m2hw8" event={"ID":"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed","Type":"ContainerStarted","Data":"e7e635384db569dd611f99ee94e01b1d79d2fe04dad06797ce6ceb3d3b11aad0"} Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.381335 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.413344 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-m2hw8" podStartSLOduration=2.413316286 podStartE2EDuration="2.413316286s" 
podCreationTimestamp="2026-01-30 07:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:03:59.064822409 +0000 UTC m=+1152.693174591" watchObservedRunningTime="2026-01-30 07:03:59.413316286 +0000 UTC m=+1153.041668467" Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.472735 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69784c8cfc-c4j47"] Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.472964 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" podUID="70d249d0-c436-449d-a28a-f565dd87be43" containerName="dnsmasq-dns" containerID="cri-o://caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45" gracePeriod=10 Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.906544 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.957474 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znh27\" (UniqueName: \"kubernetes.io/projected/70d249d0-c436-449d-a28a-f565dd87be43-kube-api-access-znh27\") pod \"70d249d0-c436-449d-a28a-f565dd87be43\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.957690 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-config\") pod \"70d249d0-c436-449d-a28a-f565dd87be43\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.957734 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-svc\") pod \"70d249d0-c436-449d-a28a-f565dd87be43\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.957815 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-swift-storage-0\") pod \"70d249d0-c436-449d-a28a-f565dd87be43\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.957925 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-nb\") pod \"70d249d0-c436-449d-a28a-f565dd87be43\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.958004 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-sb\") pod \"70d249d0-c436-449d-a28a-f565dd87be43\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " Jan 30 07:03:59 crc kubenswrapper[4520]: I0130 07:03:59.975344 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d249d0-c436-449d-a28a-f565dd87be43-kube-api-access-znh27" (OuterVolumeSpecName: "kube-api-access-znh27") pod "70d249d0-c436-449d-a28a-f565dd87be43" (UID: "70d249d0-c436-449d-a28a-f565dd87be43"). 
InnerVolumeSpecName "kube-api-access-znh27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.008709 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "70d249d0-c436-449d-a28a-f565dd87be43" (UID: "70d249d0-c436-449d-a28a-f565dd87be43"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.027345 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70d249d0-c436-449d-a28a-f565dd87be43" (UID: "70d249d0-c436-449d-a28a-f565dd87be43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.034614 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70d249d0-c436-449d-a28a-f565dd87be43" (UID: "70d249d0-c436-449d-a28a-f565dd87be43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.034652 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerStarted","Data":"0739919db0e42ab2d21e594a295adc079dbd11ac4f42597ed8b5b399d87d6ee4"} Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.037673 4520 generic.go:334] "Generic (PLEG): container finished" podID="70d249d0-c436-449d-a28a-f565dd87be43" containerID="caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45" exitCode=0 Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.037946 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" event={"ID":"70d249d0-c436-449d-a28a-f565dd87be43","Type":"ContainerDied","Data":"caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45"} Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.038015 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" event={"ID":"70d249d0-c436-449d-a28a-f565dd87be43","Type":"ContainerDied","Data":"7a5ef1c879b0202e73d8c281d51a33d4d797569c6d2ac5c6d733c0ac873611a2"} Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.038035 4520 scope.go:117] "RemoveContainer" containerID="caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.038054 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69784c8cfc-c4j47" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.045284 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-config" (OuterVolumeSpecName: "config") pod "70d249d0-c436-449d-a28a-f565dd87be43" (UID: "70d249d0-c436-449d-a28a-f565dd87be43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.059068 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70d249d0-c436-449d-a28a-f565dd87be43" (UID: "70d249d0-c436-449d-a28a-f565dd87be43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.059232 4520 scope.go:117] "RemoveContainer" containerID="df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.059915 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-nb\") pod \"70d249d0-c436-449d-a28a-f565dd87be43\" (UID: \"70d249d0-c436-449d-a28a-f565dd87be43\") " Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.060591 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.060611 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znh27\" (UniqueName: \"kubernetes.io/projected/70d249d0-c436-449d-a28a-f565dd87be43-kube-api-access-znh27\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.060622 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.060630 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.060648 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:00 crc kubenswrapper[4520]: W0130 07:04:00.061333 4520 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/70d249d0-c436-449d-a28a-f565dd87be43/volumes/kubernetes.io~configmap/ovsdbserver-nb Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.061370 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70d249d0-c436-449d-a28a-f565dd87be43" (UID: "70d249d0-c436-449d-a28a-f565dd87be43"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.081896 4520 scope.go:117] "RemoveContainer" containerID="caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45" Jan 30 07:04:00 crc kubenswrapper[4520]: E0130 07:04:00.082344 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45\": container with ID starting with caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45 not found: ID does not exist" containerID="caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.082383 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45"} err="failed to get container status \"caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45\": rpc error: code = NotFound desc = could not find container \"caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45\": container with ID starting with caa092df4d6638f79f968e02dc1cd43aaadd511155dd08e2313c9be640d93e45 not found: ID does not exist" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.082406 4520 scope.go:117] "RemoveContainer" containerID="df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824" Jan 30 07:04:00 crc kubenswrapper[4520]: E0130 07:04:00.083162 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824\": container with ID starting with df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824 not found: ID does not exist" containerID="df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.083204 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824"} err="failed to get container status \"df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824\": rpc error: code = NotFound desc = could not find container \"df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824\": container with ID starting with df001040ea12cb9259dc939f5e9bd858877d17e98e326b5ccd59cc37f3b1d824 not found: ID does not exist" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.162045 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d249d0-c436-449d-a28a-f565dd87be43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.423619 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69784c8cfc-c4j47"] Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.437214 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69784c8cfc-c4j47"] Jan 30 07:04:00 crc kubenswrapper[4520]: I0130 07:04:00.695219 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d249d0-c436-449d-a28a-f565dd87be43" path="/var/lib/kubelet/pods/70d249d0-c436-449d-a28a-f565dd87be43/volumes" Jan 30 07:04:01 crc kubenswrapper[4520]: I0130 07:04:01.046829 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerStarted","Data":"5f65b0709cbc49f21ab500e35c601379fbeed5bf2d95a64736a3a046c3ffaf9c"} Jan 30 07:04:02 crc kubenswrapper[4520]: I0130 07:04:02.087195 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerStarted","Data":"6a0eab6d2a46fa88f690d128a4a5ad7fe06e2be80d9292edbe570783e8d3a999"} Jan 30 07:04:04 crc kubenswrapper[4520]: I0130 07:04:04.110898 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerStarted","Data":"dd73d25d370ca14503c1034bad7c9cd70882e221992943d2f672c1265130f65f"} Jan 30 07:04:04 crc kubenswrapper[4520]: I0130 07:04:04.111487 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 07:04:04 crc kubenswrapper[4520]: I0130 07:04:04.113105 4520 generic.go:334] "Generic (PLEG): container finished" podID="f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" containerID="ffb33993ee1ab0f2915731e0eb0b9235930223f9e5913bfaa196dcc2c36fd749" exitCode=0 Jan 30 07:04:04 crc kubenswrapper[4520]: I0130 07:04:04.113144 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-m2hw8" event={"ID":"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed","Type":"ContainerDied","Data":"ffb33993ee1ab0f2915731e0eb0b9235930223f9e5913bfaa196dcc2c36fd749"} Jan 30 07:04:04 crc kubenswrapper[4520]: I0130 07:04:04.132055 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.674645159 podStartE2EDuration="6.132031562s" podCreationTimestamp="2026-01-30 07:03:58 +0000 UTC" firstStartedPulling="2026-01-30 07:03:58.867361255 +0000 UTC m=+1152.495713437" lastFinishedPulling="2026-01-30 07:04:03.32474766 +0000 UTC m=+1156.953099840" observedRunningTime="2026-01-30 07:04:04.129895676 +0000 UTC m=+1157.758247857" watchObservedRunningTime="2026-01-30 07:04:04.132031562 +0000 UTC m=+1157.760383743" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.422403 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.497970 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-config-data\") pod \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.498177 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-combined-ca-bundle\") pod \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.498456 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-scripts\") pod \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.498530 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qhgs\" (UniqueName: \"kubernetes.io/projected/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-kube-api-access-4qhgs\") pod \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\" (UID: \"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed\") " Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.512136 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-kube-api-access-4qhgs" (OuterVolumeSpecName: "kube-api-access-4qhgs") pod "f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" (UID: "f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed"). InnerVolumeSpecName "kube-api-access-4qhgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.512203 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-scripts" (OuterVolumeSpecName: "scripts") pod "f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" (UID: "f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.527714 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" (UID: "f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.532239 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-config-data" (OuterVolumeSpecName: "config-data") pod "f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" (UID: "f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.601968 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.602007 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.602022 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qhgs\" (UniqueName: \"kubernetes.io/projected/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-kube-api-access-4qhgs\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:05 crc kubenswrapper[4520]: I0130 07:04:05.602035 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.136501 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-m2hw8" event={"ID":"f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed","Type":"ContainerDied","Data":"e7e635384db569dd611f99ee94e01b1d79d2fe04dad06797ce6ceb3d3b11aad0"} Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.136582 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7e635384db569dd611f99ee94e01b1d79d2fe04dad06797ce6ceb3d3b11aad0" Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.136671 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-m2hw8" Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.327020 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.327099 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.373691 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.374891 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e5e5f699-f08a-4fe0-9d66-f110745cab69" containerName="nova-scheduler-scheduler" containerID="cri-o://bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7" gracePeriod=30 Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.400116 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.429734 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.430551 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-log" containerID="cri-o://6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d" gracePeriod=30 Jan 30 07:04:06 crc kubenswrapper[4520]: I0130 07:04:06.430624 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" 
containerName="nova-metadata-metadata" containerID="cri-o://109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8" gracePeriod=30 Jan 30 07:04:07 crc kubenswrapper[4520]: I0130 07:04:07.146884 4520 generic.go:334] "Generic (PLEG): container finished" podID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerID="6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d" exitCode=143 Jan 30 07:04:07 crc kubenswrapper[4520]: I0130 07:04:07.146970 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31568e9f-fbbe-4d8e-859f-1eed8d87ce26","Type":"ContainerDied","Data":"6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d"} Jan 30 07:04:07 crc kubenswrapper[4520]: I0130 07:04:07.147397 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-log" containerID="cri-o://2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2" gracePeriod=30 Jan 30 07:04:07 crc kubenswrapper[4520]: I0130 07:04:07.147534 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-api" containerID="cri-o://fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c" gracePeriod=30 Jan 30 07:04:07 crc kubenswrapper[4520]: I0130 07:04:07.152505 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": EOF" Jan 30 07:04:07 crc kubenswrapper[4520]: I0130 07:04:07.152720 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.218:8774/\": EOF" Jan 30 07:04:07 crc kubenswrapper[4520]: I0130 07:04:07.974506 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.064286 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-config-data\") pod \"e5e5f699-f08a-4fe0-9d66-f110745cab69\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.064720 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-combined-ca-bundle\") pod \"e5e5f699-f08a-4fe0-9d66-f110745cab69\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.064918 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl6sp\" (UniqueName: \"kubernetes.io/projected/e5e5f699-f08a-4fe0-9d66-f110745cab69-kube-api-access-jl6sp\") pod \"e5e5f699-f08a-4fe0-9d66-f110745cab69\" (UID: \"e5e5f699-f08a-4fe0-9d66-f110745cab69\") " Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.077500 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e5f699-f08a-4fe0-9d66-f110745cab69-kube-api-access-jl6sp" (OuterVolumeSpecName: "kube-api-access-jl6sp") pod "e5e5f699-f08a-4fe0-9d66-f110745cab69" (UID: "e5e5f699-f08a-4fe0-9d66-f110745cab69"). InnerVolumeSpecName "kube-api-access-jl6sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.107219 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5e5f699-f08a-4fe0-9d66-f110745cab69" (UID: "e5e5f699-f08a-4fe0-9d66-f110745cab69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.161924 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-config-data" (OuterVolumeSpecName: "config-data") pod "e5e5f699-f08a-4fe0-9d66-f110745cab69" (UID: "e5e5f699-f08a-4fe0-9d66-f110745cab69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.167085 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.167114 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl6sp\" (UniqueName: \"kubernetes.io/projected/e5e5f699-f08a-4fe0-9d66-f110745cab69-kube-api-access-jl6sp\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.167129 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e5f699-f08a-4fe0-9d66-f110745cab69-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.167665 4520 generic.go:334] "Generic (PLEG): container finished" podID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerID="2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2" exitCode=143 Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.167738 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6f89801-2e02-4941-b7fd-6c91b53d8823","Type":"ContainerDied","Data":"2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2"} Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.172591 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5e5f699-f08a-4fe0-9d66-f110745cab69" containerID="bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7" exitCode=0 Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.172660 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5e5f699-f08a-4fe0-9d66-f110745cab69","Type":"ContainerDied","Data":"bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7"} Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.172709 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5e5f699-f08a-4fe0-9d66-f110745cab69","Type":"ContainerDied","Data":"2afa7bd3b397887472dc363312eb007365b130a0a79d2588a81261fc509ab0a7"} Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.172714 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.172732 4520 scope.go:117] "RemoveContainer" containerID="bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.225632 4520 scope.go:117] "RemoveContainer" containerID="bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7" Jan 30 07:04:08 crc kubenswrapper[4520]: E0130 07:04:08.227244 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7\": container with ID starting with bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7 not found: ID does not exist" containerID="bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.227274 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7"} err="failed to get container status \"bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7\": rpc error: code = NotFound desc = could not find container \"bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7\": container with ID starting with bae25460cad86dce479048d205233de7f05d7f2bc06f429561ee4e67dbfdf3d7 not found: ID does not exist" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.233018 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.241829 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.255716 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:04:08 crc kubenswrapper[4520]: E0130 07:04:08.256207 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e5f699-f08a-4fe0-9d66-f110745cab69" containerName="nova-scheduler-scheduler" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.256227 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e5f699-f08a-4fe0-9d66-f110745cab69" containerName="nova-scheduler-scheduler" Jan 30 07:04:08 crc kubenswrapper[4520]: E0130 07:04:08.256249 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" containerName="nova-manage" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.256257 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" containerName="nova-manage" Jan 30 07:04:08 crc kubenswrapper[4520]: E0130 07:04:08.256270 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d249d0-c436-449d-a28a-f565dd87be43" containerName="dnsmasq-dns" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.256275 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d249d0-c436-449d-a28a-f565dd87be43" containerName="dnsmasq-dns" Jan 30 07:04:08 crc kubenswrapper[4520]: E0130 07:04:08.256285 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d249d0-c436-449d-a28a-f565dd87be43" containerName="init" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.256290 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d249d0-c436-449d-a28a-f565dd87be43" containerName="init" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 
07:04:08.256508 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e5f699-f08a-4fe0-9d66-f110745cab69" containerName="nova-scheduler-scheduler" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.256566 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" containerName="nova-manage" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.256589 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d249d0-c436-449d-a28a-f565dd87be43" containerName="dnsmasq-dns" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.257348 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.260964 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.266261 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.372876 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-config-data\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.372961 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrns\" (UniqueName: \"kubernetes.io/projected/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-kube-api-access-pkrns\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.373056 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.475282 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.475395 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-config-data\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.475455 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrns\" (UniqueName: \"kubernetes.io/projected/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-kube-api-access-pkrns\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.483225 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-config-data\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.483312 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.494120 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrns\" (UniqueName: \"kubernetes.io/projected/0c252799-8f7d-4146-aa98-4e5eaa5a65a6-kube-api-access-pkrns\") pod \"nova-scheduler-0\" (UID: \"0c252799-8f7d-4146-aa98-4e5eaa5a65a6\") " pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.578862 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 07:04:08 crc kubenswrapper[4520]: I0130 07:04:08.707660 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e5f699-f08a-4fe0-9d66-f110745cab69" path="/var/lib/kubelet/pods/e5e5f699-f08a-4fe0-9d66-f110745cab69/volumes" Jan 30 07:04:09 crc kubenswrapper[4520]: I0130 07:04:09.106371 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 07:04:09 crc kubenswrapper[4520]: I0130 07:04:09.184808 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c252799-8f7d-4146-aa98-4e5eaa5a65a6","Type":"ContainerStarted","Data":"6320bbc76633bb495c8ca725ace77a142028215fb75f951eb9ede9570991268c"} Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.106551 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.202492 4520 generic.go:334] "Generic (PLEG): container finished" podID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerID="109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8" exitCode=0 Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.202574 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31568e9f-fbbe-4d8e-859f-1eed8d87ce26","Type":"ContainerDied","Data":"109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8"} Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.202607 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31568e9f-fbbe-4d8e-859f-1eed8d87ce26","Type":"ContainerDied","Data":"13e3f7d16fdeb0366ac943811557a2b1f689e0b56cb015dedc31d2689143022e"} Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.202628 4520 scope.go:117] "RemoveContainer" containerID="109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.202626 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.205837 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c252799-8f7d-4146-aa98-4e5eaa5a65a6","Type":"ContainerStarted","Data":"59e79be81dcf62eac9fdfc80f4c7fc421b38b3c0db8d8bba36b95a4240b7cf6b"} Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.227500 4520 scope.go:117] "RemoveContainer" containerID="6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.228181 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.228164274 podStartE2EDuration="2.228164274s" podCreationTimestamp="2026-01-30 07:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:04:10.219195842 +0000 UTC m=+1163.847548023" watchObservedRunningTime="2026-01-30 07:04:10.228164274 +0000 UTC m=+1163.856516455" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.232875 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-nova-metadata-tls-certs\") pod \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.232920 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-config-data\") pod \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.232949 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-combined-ca-bundle\") pod \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.233022 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-logs\") pod \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.233255 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4xxc\" (UniqueName: \"kubernetes.io/projected/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-kube-api-access-c4xxc\") pod \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\" (UID: \"31568e9f-fbbe-4d8e-859f-1eed8d87ce26\") " Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.233691 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-logs" (OuterVolumeSpecName: "logs") pod "31568e9f-fbbe-4d8e-859f-1eed8d87ce26" (UID: "31568e9f-fbbe-4d8e-859f-1eed8d87ce26"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.233891 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.238049 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-kube-api-access-c4xxc" (OuterVolumeSpecName: "kube-api-access-c4xxc") pod "31568e9f-fbbe-4d8e-859f-1eed8d87ce26" (UID: "31568e9f-fbbe-4d8e-859f-1eed8d87ce26"). InnerVolumeSpecName "kube-api-access-c4xxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.249414 4520 scope.go:117] "RemoveContainer" containerID="109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8" Jan 30 07:04:10 crc kubenswrapper[4520]: E0130 07:04:10.249889 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8\": container with ID starting with 109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8 not found: ID does not exist" containerID="109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.249925 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8"} err="failed to get container status \"109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8\": rpc error: code = NotFound desc = could not find container \"109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8\": container with ID starting with 109d34949533441435f90d33619b5a4e8c48405ad66883d7251e861db5d634c8 not found: ID does not exist" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.249948 4520 scope.go:117] "RemoveContainer" containerID="6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d" Jan 30 07:04:10 crc kubenswrapper[4520]: E0130 07:04:10.250237 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d\": container with ID starting with 6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d not found: ID does not exist" containerID="6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.250253 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d"} err="failed to get container status \"6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d\": rpc error: code = NotFound desc = could not find container \"6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d\": container with ID starting with 6d97476e07c83927999a1442fe9e40225895fbd22b6eadea668804257fd9522d not found: ID does not exist" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.260780 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31568e9f-fbbe-4d8e-859f-1eed8d87ce26" (UID: 
"31568e9f-fbbe-4d8e-859f-1eed8d87ce26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.262703 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-config-data" (OuterVolumeSpecName: "config-data") pod "31568e9f-fbbe-4d8e-859f-1eed8d87ce26" (UID: "31568e9f-fbbe-4d8e-859f-1eed8d87ce26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.282048 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "31568e9f-fbbe-4d8e-859f-1eed8d87ce26" (UID: "31568e9f-fbbe-4d8e-859f-1eed8d87ce26"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.336621 4520 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.336659 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.336672 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.336682 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4xxc\" (UniqueName: \"kubernetes.io/projected/31568e9f-fbbe-4d8e-859f-1eed8d87ce26-kube-api-access-c4xxc\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.529180 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.537419 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.551033 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:04:10 crc kubenswrapper[4520]: E0130 07:04:10.551374 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-log" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.551393 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-log" Jan 30 07:04:10 crc kubenswrapper[4520]: E0130 07:04:10.551407 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-metadata" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.551414 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-metadata" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.551588 4520 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-log" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.551609 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" containerName="nova-metadata-metadata" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.552467 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.554891 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.560709 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.561489 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.642963 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.643034 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4c6t\" (UniqueName: \"kubernetes.io/projected/0754b75b-0d69-44f7-907a-2495795bdfd0-kube-api-access-m4c6t\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.643078 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0754b75b-0d69-44f7-907a-2495795bdfd0-logs\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.643129 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-config-data\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.643186 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.696433 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31568e9f-fbbe-4d8e-859f-1eed8d87ce26" path="/var/lib/kubelet/pods/31568e9f-fbbe-4d8e-859f-1eed8d87ce26/volumes" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.744336 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0754b75b-0d69-44f7-907a-2495795bdfd0-logs\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.744393 4520 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-config-data\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.744453 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.744507 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.744561 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4c6t\" (UniqueName: \"kubernetes.io/projected/0754b75b-0d69-44f7-907a-2495795bdfd0-kube-api-access-m4c6t\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.745275 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0754b75b-0d69-44f7-907a-2495795bdfd0-logs\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.749038 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-config-data\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.755311 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.758763 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0754b75b-0d69-44f7-907a-2495795bdfd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.763240 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4c6t\" (UniqueName: \"kubernetes.io/projected/0754b75b-0d69-44f7-907a-2495795bdfd0-kube-api-access-m4c6t\") pod \"nova-metadata-0\" (UID: \"0754b75b-0d69-44f7-907a-2495795bdfd0\") " pod="openstack/nova-metadata-0" Jan 30 07:04:10 crc kubenswrapper[4520]: I0130 07:04:10.870276 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 07:04:11 crc kubenswrapper[4520]: I0130 07:04:11.291735 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 07:04:11 crc kubenswrapper[4520]: W0130 07:04:11.294660 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0754b75b_0d69_44f7_907a_2495795bdfd0.slice/crio-0181b91237a7593442bbbb55ff2aaf718b22164bda5a692d7ae14f7a9240a8c0 WatchSource:0}: Error finding container 0181b91237a7593442bbbb55ff2aaf718b22164bda5a692d7ae14f7a9240a8c0: Status 404 returned error can't find the container with id 0181b91237a7593442bbbb55ff2aaf718b22164bda5a692d7ae14f7a9240a8c0 Jan 30 07:04:12 crc kubenswrapper[4520]: I0130 07:04:12.228629 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0754b75b-0d69-44f7-907a-2495795bdfd0","Type":"ContainerStarted","Data":"4ff9f573a1003d4a98790c7507a1dc71c9dfe913e74db3cecfcd1961b98fb9cb"} Jan 30 07:04:12 crc kubenswrapper[4520]: I0130 07:04:12.229057 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0754b75b-0d69-44f7-907a-2495795bdfd0","Type":"ContainerStarted","Data":"c308702e9c4b4743035dc99dff86897bee17612d280a7fa0fc0dcddca664c85d"} Jan 30 07:04:12 crc kubenswrapper[4520]: I0130 07:04:12.229069 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0754b75b-0d69-44f7-907a-2495795bdfd0","Type":"ContainerStarted","Data":"0181b91237a7593442bbbb55ff2aaf718b22164bda5a692d7ae14f7a9240a8c0"} Jan 30 07:04:12 crc kubenswrapper[4520]: I0130 07:04:12.248459 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.24843614 podStartE2EDuration="2.24843614s" podCreationTimestamp="2026-01-30 07:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:04:12.245294182 +0000 UTC m=+1165.873646363" watchObservedRunningTime="2026-01-30 07:04:12.24843614 +0000 UTC m=+1165.876788322" Jan 30 07:04:13 crc kubenswrapper[4520]: I0130 07:04:13.579250 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 07:04:13 crc kubenswrapper[4520]: I0130 07:04:13.925026 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.023390 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-config-data\") pod \"a6f89801-2e02-4941-b7fd-6c91b53d8823\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.023508 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-internal-tls-certs\") pod \"a6f89801-2e02-4941-b7fd-6c91b53d8823\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.023570 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-combined-ca-bundle\") pod \"a6f89801-2e02-4941-b7fd-6c91b53d8823\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.023615 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f89801-2e02-4941-b7fd-6c91b53d8823-logs\") pod \"a6f89801-2e02-4941-b7fd-6c91b53d8823\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.023714 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgzj4\" (UniqueName: \"kubernetes.io/projected/a6f89801-2e02-4941-b7fd-6c91b53d8823-kube-api-access-xgzj4\") pod \"a6f89801-2e02-4941-b7fd-6c91b53d8823\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.023889 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-public-tls-certs\") pod \"a6f89801-2e02-4941-b7fd-6c91b53d8823\" (UID: \"a6f89801-2e02-4941-b7fd-6c91b53d8823\") " Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.024172 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f89801-2e02-4941-b7fd-6c91b53d8823-logs" (OuterVolumeSpecName: "logs") pod "a6f89801-2e02-4941-b7fd-6c91b53d8823" (UID: "a6f89801-2e02-4941-b7fd-6c91b53d8823"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.024575 4520 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f89801-2e02-4941-b7fd-6c91b53d8823-logs\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.037062 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f89801-2e02-4941-b7fd-6c91b53d8823-kube-api-access-xgzj4" (OuterVolumeSpecName: "kube-api-access-xgzj4") pod "a6f89801-2e02-4941-b7fd-6c91b53d8823" (UID: "a6f89801-2e02-4941-b7fd-6c91b53d8823"). InnerVolumeSpecName "kube-api-access-xgzj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.050334 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-config-data" (OuterVolumeSpecName: "config-data") pod "a6f89801-2e02-4941-b7fd-6c91b53d8823" (UID: "a6f89801-2e02-4941-b7fd-6c91b53d8823"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.055735 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6f89801-2e02-4941-b7fd-6c91b53d8823" (UID: "a6f89801-2e02-4941-b7fd-6c91b53d8823"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.069856 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a6f89801-2e02-4941-b7fd-6c91b53d8823" (UID: "a6f89801-2e02-4941-b7fd-6c91b53d8823"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.079480 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a6f89801-2e02-4941-b7fd-6c91b53d8823" (UID: "a6f89801-2e02-4941-b7fd-6c91b53d8823"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.127323 4520 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.127364 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.127380 4520 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.127391 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f89801-2e02-4941-b7fd-6c91b53d8823-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.127403 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgzj4\" (UniqueName: \"kubernetes.io/projected/a6f89801-2e02-4941-b7fd-6c91b53d8823-kube-api-access-xgzj4\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.252093 4520 generic.go:334] "Generic (PLEG): container finished" podID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerID="fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c" exitCode=0 Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.252209 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.252209 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6f89801-2e02-4941-b7fd-6c91b53d8823","Type":"ContainerDied","Data":"fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c"}
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.252351 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6f89801-2e02-4941-b7fd-6c91b53d8823","Type":"ContainerDied","Data":"3176ecd493ec7d0d888d21365518303c7333dab72b572ce0b2f59ac9340d5d64"}
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.252385 4520 scope.go:117] "RemoveContainer" containerID="fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.252242 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.278351 4520 scope.go:117] "RemoveContainer" containerID="2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.299240 4520 scope.go:117] "RemoveContainer" containerID="fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.299363 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 07:04:14 crc kubenswrapper[4520]: E0130 07:04:14.300955 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c\": container with ID starting with fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c not found: ID does not exist" containerID="fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.300986 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c"} err="failed to get container status \"fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c\": rpc error: code = NotFound desc = could not find container \"fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c\": container with ID starting with fc9b074e43306d69f456a18be13dc6b4ebe95142cfb6a99b0648665bc5cc269c not found: ID does not exist"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.301010 4520 scope.go:117] "RemoveContainer" containerID="2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2"
Jan 30 07:04:14 crc kubenswrapper[4520]: E0130 07:04:14.301456 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2\": container with ID starting with 2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2 not found: ID does not exist" containerID="2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2"
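The log.go:32 errors above are benign: the kubelet has just removed containers fc9b074e... and 2d3f28b2..., so its follow-up ContainerStatus query to the CRI runtime comes back with gRPC code NotFound, which pod_container_deletor.go logs and ignores. Tolerating NotFound is what makes the delete idempotent. A minimal sketch of that pattern in Go; removeIfPresent and the stubbed runtime call are hypothetical, not kubelet code:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent runs a gRPC-backed delete and treats NotFound as
// success: a container that is already gone is the state a delete wants.
func removeIfPresent(remove func(id string) error, id string) error {
	err := remove(id)
	if err == nil {
		return nil
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		fmt.Printf("container %s already gone, nothing to do\n", id)
		return nil
	}
	return fmt.Errorf("removing %s: %w", id, err)
}

func main() {
	// Stand-in for the runtime call, always reporting NotFound,
	// like the ContainerStatus responses in the records above.
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeIfPresent(gone, "fc9b074e"); err != nil {
		panic(err)
	}
}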
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.301500 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2"} err="failed to get container status \"2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2\": rpc error: code = NotFound desc = could not find container \"2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2\": container with ID starting with 2d3f28b21393e1dd962ae2619d7daf93a9f0da966da80af25a87008dff034cf2 not found: ID does not exist"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.318451 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.328267 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 07:04:14 crc kubenswrapper[4520]: E0130 07:04:14.328806 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-api"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.328829 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-api"
Jan 30 07:04:14 crc kubenswrapper[4520]: E0130 07:04:14.328872 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-log"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.328880 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-log"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.329062 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-log"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.329094 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" containerName="nova-api-api"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.330208 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.332724 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.333026 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.333110 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.347296 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.434909 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.434962 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j7hm\" (UniqueName: \"kubernetes.io/projected/4981c3f7-b5ce-4908-b025-7ed0ff38398a-kube-api-access-5j7hm\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.435023 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-config-data\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0"
Jan 30 07:04:14 crc kubenswrapper[4520]: I0130
07:04:14.435107 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.435321 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4981c3f7-b5ce-4908-b025-7ed0ff38398a-logs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.435353 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-public-tls-certs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.538247 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.538297 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j7hm\" (UniqueName: \"kubernetes.io/projected/4981c3f7-b5ce-4908-b025-7ed0ff38398a-kube-api-access-5j7hm\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.538337 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-config-data\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.538388 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.538466 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4981c3f7-b5ce-4908-b025-7ed0ff38398a-logs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.538488 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-public-tls-certs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.539154 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4981c3f7-b5ce-4908-b025-7ed0ff38398a-logs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 
07:04:14.543688 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-config-data\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.544082 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.545378 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.546010 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4981c3f7-b5ce-4908-b025-7ed0ff38398a-public-tls-certs\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.554823 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j7hm\" (UniqueName: \"kubernetes.io/projected/4981c3f7-b5ce-4908-b025-7ed0ff38398a-kube-api-access-5j7hm\") pod \"nova-api-0\" (UID: \"4981c3f7-b5ce-4908-b025-7ed0ff38398a\") " pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.650422 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 07:04:14 crc kubenswrapper[4520]: I0130 07:04:14.707253 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f89801-2e02-4941-b7fd-6c91b53d8823" path="/var/lib/kubelet/pods/a6f89801-2e02-4941-b7fd-6c91b53d8823/volumes" Jan 30 07:04:15 crc kubenswrapper[4520]: I0130 07:04:15.122576 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 07:04:15 crc kubenswrapper[4520]: W0130 07:04:15.133001 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4981c3f7_b5ce_4908_b025_7ed0ff38398a.slice/crio-3f23a195705e02a14fdc9c993dce1bf0fff2837f22916fd2856cedf47ecae06f WatchSource:0}: Error finding container 3f23a195705e02a14fdc9c993dce1bf0fff2837f22916fd2856cedf47ecae06f: Status 404 returned error can't find the container with id 3f23a195705e02a14fdc9c993dce1bf0fff2837f22916fd2856cedf47ecae06f Jan 30 07:04:15 crc kubenswrapper[4520]: I0130 07:04:15.265192 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4981c3f7-b5ce-4908-b025-7ed0ff38398a","Type":"ContainerStarted","Data":"3f23a195705e02a14fdc9c993dce1bf0fff2837f22916fd2856cedf47ecae06f"} Jan 30 07:04:15 crc kubenswrapper[4520]: I0130 07:04:15.870834 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 07:04:15 crc kubenswrapper[4520]: I0130 07:04:15.872328 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 07:04:16 crc kubenswrapper[4520]: I0130 07:04:16.276253 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4981c3f7-b5ce-4908-b025-7ed0ff38398a","Type":"ContainerStarted","Data":"b95e2114b7a977bf0bbc0731c276cc30535286a4294b81142cf4a3bae022d0be"} Jan 30 07:04:16 crc kubenswrapper[4520]: I0130 07:04:16.276652 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4981c3f7-b5ce-4908-b025-7ed0ff38398a","Type":"ContainerStarted","Data":"89e15dd9a404bc144acec77d4f9a169ae717ac659ba1e9a97da98f1abffc687c"} Jan 30 07:04:16 crc kubenswrapper[4520]: I0130 07:04:16.298110 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.298089319 podStartE2EDuration="2.298089319s" podCreationTimestamp="2026-01-30 07:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:04:16.297162526 +0000 UTC m=+1169.925514707" watchObservedRunningTime="2026-01-30 07:04:16.298089319 +0000 UTC m=+1169.926441490" Jan 30 07:04:18 crc kubenswrapper[4520]: I0130 07:04:18.579783 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 07:04:18 crc kubenswrapper[4520]: I0130 07:04:18.611221 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 07:04:19 crc kubenswrapper[4520]: I0130 07:04:19.322310 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 07:04:20 crc kubenswrapper[4520]: I0130 07:04:20.870877 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 07:04:20 crc kubenswrapper[4520]: I0130 07:04:20.871197 4520 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 07:04:21 crc kubenswrapper[4520]: I0130 07:04:21.890672 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0754b75b-0d69-44f7-907a-2495795bdfd0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 07:04:21 crc kubenswrapper[4520]: I0130 07:04:21.890710 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0754b75b-0d69-44f7-907a-2495795bdfd0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 07:04:24 crc kubenswrapper[4520]: I0130 07:04:24.650929 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 07:04:24 crc kubenswrapper[4520]: I0130 07:04:24.651238 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 07:04:25 crc kubenswrapper[4520]: I0130 07:04:25.677684 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4981c3f7-b5ce-4908-b025-7ed0ff38398a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 07:04:25 crc kubenswrapper[4520]: I0130 07:04:25.677687 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4981c3f7-b5ce-4908-b025-7ed0ff38398a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 07:04:28 crc kubenswrapper[4520]: I0130 07:04:28.429567 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 07:04:30 crc kubenswrapper[4520]: I0130 07:04:30.875389 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 07:04:30 crc kubenswrapper[4520]: I0130 07:04:30.879638 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 07:04:30 crc kubenswrapper[4520]: I0130 07:04:30.887858 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 07:04:31 crc kubenswrapper[4520]: I0130 07:04:31.504025 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 07:04:34 crc kubenswrapper[4520]: I0130 07:04:34.667358 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 07:04:34 crc kubenswrapper[4520]: I0130 07:04:34.668245 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 07:04:34 crc kubenswrapper[4520]: I0130 07:04:34.668432 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 07:04:34 crc kubenswrapper[4520]: I0130 07:04:34.673297 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 07:04:35 crc kubenswrapper[4520]: I0130 07:04:35.462866 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
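The "Probe failed" outputs above are Go net/http client timeouts: the kubelet prober runs each probe GET with a hard client-side timeout, and when response headers do not arrive in time the error ends in "(Client.Timeout exceeded while awaiting headers)". Here that only means nova-metadata and nova-api were still initializing; both report startup "started" and readiness "ready" within about ten seconds. A minimal reproduction of the same error, with a hypothetical slow handler standing in for the API container and plain HTTP instead of the TLS endpoints in the log:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A server that holds the request longer than the client allows,
	// like an API process that has not finished starting up.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	}))
	defer slow.Close()

	// The prober similarly sets a hard overall timeout on its probe client.
	client := &http.Client{Timeout: 500 * time.Millisecond}
	_, err := client.Get(slow.URL)
	fmt.Println(err)
	// Prints an error ending in:
	//   context deadline exceeded (Client.Timeout exceeded while awaiting headers)
}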
Jan 30 07:04:35 crc kubenswrapper[4520]: I0130 07:04:35.469377 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 07:04:42 crc kubenswrapper[4520]: I0130 07:04:42.193712 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 07:04:43 crc kubenswrapper[4520]: I0130 07:04:43.205970 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 07:04:47 crc kubenswrapper[4520]: I0130 07:04:47.408757 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerName="rabbitmq" containerID="cri-o://191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c" gracePeriod=604795
Jan 30 07:04:47 crc kubenswrapper[4520]: I0130 07:04:47.788504 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fc4abc0f-2827-4636-9942-342593697905" containerName="rabbitmq" containerID="cri-o://4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98" gracePeriod=604796
Jan 30 07:04:49 crc kubenswrapper[4520]: I0130 07:04:49.695639 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused"
Jan 30 07:04:50 crc kubenswrapper[4520]: I0130 07:04:50.029016 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fc4abc0f-2827-4636-9942-342593697905" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.148543 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66b775f657-5j6x5"]
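The two gracePeriod values above look odd but line up: assuming the RabbitMQ pods request terminationGracePeriodSeconds: 604800 (seven days, a common setting so a broker can drain before shutdown; the spec value is inferred from the numbers, not shown in this log), each kill is issued with 604800 minus the whole seconds elapsed between the pod's SyncLoop DELETE and its "Killing container" record:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed spec value: terminationGracePeriodSeconds: 604800 (7 days).
	const specGrace = 7 * 24 * 60 * 60

	// Timestamps for rabbitmq-server-0, taken from the records above.
	deleted := time.Date(2026, time.January, 30, 7, 4, 42, 193712000, time.UTC) // SyncLoop DELETE
	killed := time.Date(2026, time.January, 30, 7, 4, 47, 408757000, time.UTC)  // Killing container

	remaining := specGrace - int(killed.Sub(deleted).Seconds()) // truncate to whole seconds
	fmt.Println(remaining)                                      // 604795, matching gracePeriod=604795
}

The same arithmetic yields 604796 for rabbitmq-cell1-server-0, which was deleted about a second later but killed at nearly the same moment.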
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.150482 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b775f657-5j6x5"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.156723 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.159848 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b775f657-5j6x5"]
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.231359 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-swift-storage-0\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.231417 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-svc\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.231640 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-nb\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.231795 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-sb\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.231966 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-config\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.232118 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.232322 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d75fv\" (UniqueName: \"kubernetes.io/projected/33482717-70e7-430c-b38e-25d9cfaac08b-kube-api-access-d75fv\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5"
Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.334851 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d75fv\" (UniqueName: \"kubernetes.io/projected/33482717-70e7-430c-b38e-25d9cfaac08b-kube-api-access-d75fv\") pod
\"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.334920 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-swift-storage-0\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.334948 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-svc\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.334998 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-nb\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.335033 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-sb\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.335092 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-config\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.335164 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.336273 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-sb\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.336320 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.336344 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-svc\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " 
pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.336677 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-config\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.336694 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-nb\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.336973 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-swift-storage-0\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.358930 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d75fv\" (UniqueName: \"kubernetes.io/projected/33482717-70e7-430c-b38e-25d9cfaac08b-kube-api-access-d75fv\") pod \"dnsmasq-dns-66b775f657-5j6x5\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.466640 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.912989 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b775f657-5j6x5"] Jan 30 07:04:53 crc kubenswrapper[4520]: I0130 07:04:53.994243 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.102040 4520 scope.go:117] "RemoveContainer" containerID="163e771c24eeb7d5133bc8d1013b839f3e5ccdaa9f64759d7a1ab8384a1b0f44" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.156361 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-server-conf\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.156403 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-config-data\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.156490 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-pod-info\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.156557 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-tls\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.156589 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-confd\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.156647 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lbhk\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-kube-api-access-7lbhk\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.156718 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-erlang-cookie\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.156740 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-plugins\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.162359 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-erlang-cookie-secret\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.162389 4520 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-plugins-conf\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.162426 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\" (UID: \"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.167395 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.168252 4520 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.169315 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.174598 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.175665 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.177121 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-kube-api-access-7lbhk" (OuterVolumeSpecName: "kube-api-access-7lbhk") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "kube-api-access-7lbhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.178746 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.180697 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-pod-info" (OuterVolumeSpecName: "pod-info") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.189391 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.261212 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-config-data" (OuterVolumeSpecName: "config-data") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.265456 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-server-conf" (OuterVolumeSpecName: "server-conf") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272570 4520 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272599 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272609 4520 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272618 4520 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272631 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lbhk\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-kube-api-access-7lbhk\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272641 4520 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272658 4520 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272668 4520 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.272698 4520 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.293728 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.308187 4520 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.359715 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" (UID: "8b8c48de-512c-4fd1-b2de-e0e0a4fb8184"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.391593 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-config-data\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.391773 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-tls\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.391892 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-erlang-cookie\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.392058 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-server-conf\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.392117 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc4abc0f-2827-4636-9942-342593697905-pod-info\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.392170 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc4abc0f-2827-4636-9942-342593697905-erlang-cookie-secret\") pod 
\"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.392252 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-plugins\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.392345 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjzl\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-kube-api-access-7sjzl\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.392423 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-confd\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.392479 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.394196 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-plugins-conf\") pod \"fc4abc0f-2827-4636-9942-342593697905\" (UID: \"fc4abc0f-2827-4636-9942-342593697905\") " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.395212 4520 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.395235 4520 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.403114 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.403947 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.410845 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.414228 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-kube-api-access-7sjzl" (OuterVolumeSpecName: "kube-api-access-7sjzl") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "kube-api-access-7sjzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.417134 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc4abc0f-2827-4636-9942-342593697905-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.419265 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.429545 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.431932 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fc4abc0f-2827-4636-9942-342593697905-pod-info" (OuterVolumeSpecName: "pod-info") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.445275 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-config-data" (OuterVolumeSpecName: "config-data") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.476046 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-server-conf" (OuterVolumeSpecName: "server-conf") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.498475 4520 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.498867 4520 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.498945 4520 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc4abc0f-2827-4636-9942-342593697905-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.498997 4520 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc4abc0f-2827-4636-9942-342593697905-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.499045 4520 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.499091 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sjzl\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-kube-api-access-7sjzl\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.499155 4520 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.499214 4520 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.499263 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc4abc0f-2827-4636-9942-342593697905-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.499314 4520 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.536642 4520 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.561217 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fc4abc0f-2827-4636-9942-342593697905" (UID: "fc4abc0f-2827-4636-9942-342593697905"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.601078 4520 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc4abc0f-2827-4636-9942-342593697905-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.601107 4520 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.657911 4520 generic.go:334] "Generic (PLEG): container finished" podID="33482717-70e7-430c-b38e-25d9cfaac08b" containerID="df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73" exitCode=0 Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.657982 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" event={"ID":"33482717-70e7-430c-b38e-25d9cfaac08b","Type":"ContainerDied","Data":"df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73"} Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.658010 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" event={"ID":"33482717-70e7-430c-b38e-25d9cfaac08b","Type":"ContainerStarted","Data":"062b41fc406d24feb122f6ad5c18af27766adff4569c93d9c781e230162e0f9d"} Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.661307 4520 generic.go:334] "Generic (PLEG): container finished" podID="fc4abc0f-2827-4636-9942-342593697905" containerID="4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98" exitCode=0 Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.661355 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc4abc0f-2827-4636-9942-342593697905","Type":"ContainerDied","Data":"4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98"} Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.661372 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc4abc0f-2827-4636-9942-342593697905","Type":"ContainerDied","Data":"ba8de7795f3e191f6c65534006eb557c295f434582651b1d8f7277f7ef9b45be"} Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.661422 4520 scope.go:117] "RemoveContainer" containerID="4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.661577 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.665137 4520 generic.go:334] "Generic (PLEG): container finished" podID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerID="191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c" exitCode=0 Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.665189 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.665200 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184","Type":"ContainerDied","Data":"191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c"} Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.665584 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8b8c48de-512c-4fd1-b2de-e0e0a4fb8184","Type":"ContainerDied","Data":"02eda79311ba45f35adc42ede213147af0a559fdc676fafdf0875da6846faf29"} Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.756210 4520 scope.go:117] "RemoveContainer" containerID="d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.772765 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.796713 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.817833 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.821354 4520 scope.go:117] "RemoveContainer" containerID="4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98" Jan 30 07:04:54 crc kubenswrapper[4520]: E0130 07:04:54.823197 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98\": container with ID starting with 4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98 not found: ID does not exist" containerID="4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.823293 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98"} err="failed to get container status \"4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98\": rpc error: code = NotFound desc = could not find container \"4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98\": container with ID starting with 4f6a2217df55733c4a8753cc24d09c918992ad15e0a5e636694dd5b9b8c98f98 not found: ID does not exist" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.823331 4520 scope.go:117] "RemoveContainer" containerID="d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2" Jan 30 07:04:54 crc kubenswrapper[4520]: E0130 07:04:54.824243 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2\": container with ID starting with d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2 not found: ID does not exist" containerID="d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.824305 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2"} err="failed to get container status \"d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2\": rpc error: code = 
NotFound desc = could not find container \"d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2\": container with ID starting with d7a6d151df430a61dcc4b3c25d238a677c1755d79bfba40e96fd5f6557baebe2 not found: ID does not exist" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.824337 4520 scope.go:117] "RemoveContainer" containerID="191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.852561 4520 scope.go:117] "RemoveContainer" containerID="191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c" Jan 30 07:04:54 crc kubenswrapper[4520]: E0130 07:04:54.853064 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c\": container with ID starting with 191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c not found: ID does not exist" containerID="191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.854551 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c"} err="failed to get container status \"191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c\": rpc error: code = NotFound desc = could not find container \"191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c\": container with ID starting with 191e311e5049d7a75ccb50ab93e9140e570a18bcb388d44b938b80045e61ff7c not found: ID does not exist" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.868558 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.886461 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 07:04:54 crc kubenswrapper[4520]: E0130 07:04:54.887715 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerName="rabbitmq" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.887739 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerName="rabbitmq" Jan 30 07:04:54 crc kubenswrapper[4520]: E0130 07:04:54.887768 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4abc0f-2827-4636-9942-342593697905" containerName="setup-container" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.887776 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4abc0f-2827-4636-9942-342593697905" containerName="setup-container" Jan 30 07:04:54 crc kubenswrapper[4520]: E0130 07:04:54.887797 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4abc0f-2827-4636-9942-342593697905" containerName="rabbitmq" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.887803 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4abc0f-2827-4636-9942-342593697905" containerName="rabbitmq" Jan 30 07:04:54 crc kubenswrapper[4520]: E0130 07:04:54.887825 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerName="setup-container" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.887832 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerName="setup-container" Jan 30 07:04:54 crc 
kubenswrapper[4520]: I0130 07:04:54.888010 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc4abc0f-2827-4636-9942-342593697905" containerName="rabbitmq" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.888047 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" containerName="rabbitmq" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.889294 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.892222 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.892791 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.892898 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.892959 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6kzwj" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.893025 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.893272 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.894363 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.900431 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.917275 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.919148 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.921836 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.923798 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.923967 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.924113 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.924231 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ndz6j" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.924444 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.926016 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928212 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928255 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6de9b0a4-9862-4be2-8d01-aba25647bf18-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928314 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928347 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928487 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928569 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928658 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5nhc\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-kube-api-access-f5nhc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928688 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6de9b0a4-9862-4be2-8d01-aba25647bf18-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928714 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928767 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.928790 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:54 crc kubenswrapper[4520]: I0130 07:04:54.939591 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030034 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030089 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030124 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030152 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030175 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030203 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030227 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030264 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5nhc\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-kube-api-access-f5nhc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030290 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6de9b0a4-9862-4be2-8d01-aba25647bf18-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030320 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030352 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030378 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030401 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030428 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59975299-c1c9-4ccd-a067-964e4644bc92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030466 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59975299-c1c9-4ccd-a067-964e4644bc92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030487 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-config-data\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030531 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkz7\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-kube-api-access-ldkz7\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030564 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030589 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6de9b0a4-9862-4be2-8d01-aba25647bf18-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030617 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030646 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.030674 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 
30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.031765 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.031763 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.032060 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6de9b0a4-9862-4be2-8d01-aba25647bf18-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.032232 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.034990 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.035150 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.035685 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.036582 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.039296 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6de9b0a4-9862-4be2-8d01-aba25647bf18-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.039315 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/6de9b0a4-9862-4be2-8d01-aba25647bf18-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.051182 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5nhc\" (UniqueName: \"kubernetes.io/projected/6de9b0a4-9862-4be2-8d01-aba25647bf18-kube-api-access-f5nhc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.066476 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6de9b0a4-9862-4be2-8d01-aba25647bf18\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133076 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133150 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59975299-c1c9-4ccd-a067-964e4644bc92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133219 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59975299-c1c9-4ccd-a067-964e4644bc92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133243 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-config-data\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133279 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldkz7\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-kube-api-access-ldkz7\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133325 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133361 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 
07:04:55.133387 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133431 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133450 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133479 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.133660 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.134954 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.134481 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.134721 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.134811 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.134333 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59975299-c1c9-4ccd-a067-964e4644bc92-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.137666 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59975299-c1c9-4ccd-a067-964e4644bc92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.137946 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.137945 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59975299-c1c9-4ccd-a067-964e4644bc92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.138437 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.149636 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldkz7\" (UniqueName: \"kubernetes.io/projected/59975299-c1c9-4ccd-a067-964e4644bc92-kube-api-access-ldkz7\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.173703 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"59975299-c1c9-4ccd-a067-964e4644bc92\") " pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.216203 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.250056 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.557320 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.699200 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6de9b0a4-9862-4be2-8d01-aba25647bf18","Type":"ContainerStarted","Data":"0e3b37d836acffb57f01816b79a6a4b7d74bbcb019d7fae305c504fdd8224f3f"} Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.711749 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" event={"ID":"33482717-70e7-430c-b38e-25d9cfaac08b","Type":"ContainerStarted","Data":"7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace"} Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.712567 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.748667 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 07:04:55 crc kubenswrapper[4520]: I0130 07:04:55.752871 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" podStartSLOduration=2.752857017 podStartE2EDuration="2.752857017s" podCreationTimestamp="2026-01-30 07:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:04:55.74442356 +0000 UTC m=+1209.372775742" watchObservedRunningTime="2026-01-30 07:04:55.752857017 +0000 UTC m=+1209.381209198" Jan 30 07:04:56 crc kubenswrapper[4520]: I0130 07:04:56.697576 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8c48de-512c-4fd1-b2de-e0e0a4fb8184" path="/var/lib/kubelet/pods/8b8c48de-512c-4fd1-b2de-e0e0a4fb8184/volumes" Jan 30 07:04:56 crc kubenswrapper[4520]: I0130 07:04:56.699394 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc4abc0f-2827-4636-9942-342593697905" path="/var/lib/kubelet/pods/fc4abc0f-2827-4636-9942-342593697905/volumes" Jan 30 07:04:56 crc kubenswrapper[4520]: I0130 07:04:56.729379 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59975299-c1c9-4ccd-a067-964e4644bc92","Type":"ContainerStarted","Data":"8cb9fbd37f5abcdb9970005acaa72f276aeffbcdcbaff4c72b779d87cc10bed8"} Jan 30 07:04:57 crc kubenswrapper[4520]: I0130 07:04:57.738304 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59975299-c1c9-4ccd-a067-964e4644bc92","Type":"ContainerStarted","Data":"fa8432e4b7663a2c9b18efb43fb4ef8bd76809715123db77ad906131f581cf61"} Jan 30 07:04:57 crc kubenswrapper[4520]: I0130 07:04:57.740114 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6de9b0a4-9862-4be2-8d01-aba25647bf18","Type":"ContainerStarted","Data":"94b431ee0406c629dc9aae44a9e153e06a794e6cb5bbffefec79eba4b86acd66"} Jan 30 07:04:57 crc kubenswrapper[4520]: I0130 07:04:57.793414 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:04:57 crc kubenswrapper[4520]: 
I0130 07:04:57.793503 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.468600 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.529563 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f464775-8fv4z"] Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.529824 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f464775-8fv4z" podUID="c24cc31d-16b3-4859-a413-dbb766b276e2" containerName="dnsmasq-dns" containerID="cri-o://2b197a0fe1610f4ce5f7ecfb357fa5389a403593122ba7d89f9ccb5a7776245b" gracePeriod=10 Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.679369 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c4f6c9f-q6cvm"] Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.692818 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.701764 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c4f6c9f-q6cvm"] Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.817705 4520 generic.go:334] "Generic (PLEG): container finished" podID="c24cc31d-16b3-4859-a413-dbb766b276e2" containerID="2b197a0fe1610f4ce5f7ecfb357fa5389a403593122ba7d89f9ccb5a7776245b" exitCode=0 Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.817755 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f464775-8fv4z" event={"ID":"c24cc31d-16b3-4859-a413-dbb766b276e2","Type":"ContainerDied","Data":"2b197a0fe1610f4ce5f7ecfb357fa5389a403593122ba7d89f9ccb5a7776245b"} Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.827138 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.827265 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-config\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.827380 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-dns-svc\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.827446 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-ovsdbserver-sb\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.827630 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fmxc\" (UniqueName: \"kubernetes.io/projected/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-kube-api-access-5fmxc\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.827666 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-ovsdbserver-nb\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.827756 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-dns-swift-storage-0\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.931944 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-config\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.932019 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-dns-svc\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.932080 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-ovsdbserver-sb\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.932191 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fmxc\" (UniqueName: \"kubernetes.io/projected/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-kube-api-access-5fmxc\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.932224 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-ovsdbserver-nb\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.932265 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-dns-swift-storage-0\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.932347 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.933134 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-dns-svc\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.933423 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.933496 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-config\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.934093 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-ovsdbserver-nb\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.934185 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-ovsdbserver-sb\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.934400 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-dns-swift-storage-0\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:03 crc kubenswrapper[4520]: I0130 07:05:03.970139 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fmxc\" (UniqueName: \"kubernetes.io/projected/87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7-kube-api-access-5fmxc\") pod \"dnsmasq-dns-57c4f6c9f-q6cvm\" (UID: \"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7\") " pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.036726 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.100354 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.242091 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-swift-storage-0\") pod \"c24cc31d-16b3-4859-a413-dbb766b276e2\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.242284 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-sb\") pod \"c24cc31d-16b3-4859-a413-dbb766b276e2\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.242415 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-config\") pod \"c24cc31d-16b3-4859-a413-dbb766b276e2\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.242455 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bc9d\" (UniqueName: \"kubernetes.io/projected/c24cc31d-16b3-4859-a413-dbb766b276e2-kube-api-access-7bc9d\") pod \"c24cc31d-16b3-4859-a413-dbb766b276e2\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.242601 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-nb\") pod \"c24cc31d-16b3-4859-a413-dbb766b276e2\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.242964 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-svc\") pod \"c24cc31d-16b3-4859-a413-dbb766b276e2\" (UID: \"c24cc31d-16b3-4859-a413-dbb766b276e2\") " Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.255084 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24cc31d-16b3-4859-a413-dbb766b276e2-kube-api-access-7bc9d" (OuterVolumeSpecName: "kube-api-access-7bc9d") pod "c24cc31d-16b3-4859-a413-dbb766b276e2" (UID: "c24cc31d-16b3-4859-a413-dbb766b276e2"). InnerVolumeSpecName "kube-api-access-7bc9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.284441 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c24cc31d-16b3-4859-a413-dbb766b276e2" (UID: "c24cc31d-16b3-4859-a413-dbb766b276e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.285352 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c24cc31d-16b3-4859-a413-dbb766b276e2" (UID: "c24cc31d-16b3-4859-a413-dbb766b276e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.301614 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c24cc31d-16b3-4859-a413-dbb766b276e2" (UID: "c24cc31d-16b3-4859-a413-dbb766b276e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.303930 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-config" (OuterVolumeSpecName: "config") pod "c24cc31d-16b3-4859-a413-dbb766b276e2" (UID: "c24cc31d-16b3-4859-a413-dbb766b276e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.312082 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c24cc31d-16b3-4859-a413-dbb766b276e2" (UID: "c24cc31d-16b3-4859-a413-dbb766b276e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.346413 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.346473 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.346488 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.346498 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bc9d\" (UniqueName: \"kubernetes.io/projected/c24cc31d-16b3-4859-a413-dbb766b276e2-kube-api-access-7bc9d\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.346508 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.346559 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c24cc31d-16b3-4859-a413-dbb766b276e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.501400 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-57c4f6c9f-q6cvm"] Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.832614 4520 generic.go:334] "Generic (PLEG): container finished" podID="87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7" containerID="b520306a5e63dcf192fa94104b00fd736f4c6647461c4fe73990c0a5cbd823ca" exitCode=0 Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.832720 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" event={"ID":"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7","Type":"ContainerDied","Data":"b520306a5e63dcf192fa94104b00fd736f4c6647461c4fe73990c0a5cbd823ca"} Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.833109 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" event={"ID":"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7","Type":"ContainerStarted","Data":"ff0bb8fe2de280ecd478deace08e6e861e27f8e7a4350a760e14370d2e914397"} Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.835792 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f464775-8fv4z" event={"ID":"c24cc31d-16b3-4859-a413-dbb766b276e2","Type":"ContainerDied","Data":"7d30d23535ae5adb298979836d1919bdb9c5e6a73742ce111cbb4fdfdb4a9079"} Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.835869 4520 scope.go:117] "RemoveContainer" containerID="2b197a0fe1610f4ce5f7ecfb357fa5389a403593122ba7d89f9ccb5a7776245b" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.835910 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f464775-8fv4z" Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.890865 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f464775-8fv4z"] Jan 30 07:05:04 crc kubenswrapper[4520]: I0130 07:05:04.897042 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f464775-8fv4z"] Jan 30 07:05:05 crc kubenswrapper[4520]: I0130 07:05:05.019030 4520 scope.go:117] "RemoveContainer" containerID="c2ee6f3d71320e5fa50ba665a846f444bc686b479b70f89c1f00b6ba24547955" Jan 30 07:05:05 crc kubenswrapper[4520]: I0130 07:05:05.850609 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" event={"ID":"87b5f8b5-6fd5-41f1-92a1-6c3a56eb82b7","Type":"ContainerStarted","Data":"e6dc95320b7ab41105b61ccefcc350d53fb2f301df2da1677372cdce797711fd"} Jan 30 07:05:05 crc kubenswrapper[4520]: I0130 07:05:05.850843 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:05 crc kubenswrapper[4520]: I0130 07:05:05.882749 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" podStartSLOduration=2.8827288859999998 podStartE2EDuration="2.882728886s" podCreationTimestamp="2026-01-30 07:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:05:05.87421678 +0000 UTC m=+1219.502568962" watchObservedRunningTime="2026-01-30 07:05:05.882728886 +0000 UTC m=+1219.511081066" Jan 30 07:05:06 crc kubenswrapper[4520]: I0130 07:05:06.699781 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24cc31d-16b3-4859-a413-dbb766b276e2" path="/var/lib/kubelet/pods/c24cc31d-16b3-4859-a413-dbb766b276e2/volumes" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.037729 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-57c4f6c9f-q6cvm" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.106270 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b775f657-5j6x5"] Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.106487 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" podUID="33482717-70e7-430c-b38e-25d9cfaac08b" containerName="dnsmasq-dns" containerID="cri-o://7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace" gracePeriod=10 Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.554138 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.686831 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d75fv\" (UniqueName: \"kubernetes.io/projected/33482717-70e7-430c-b38e-25d9cfaac08b-kube-api-access-d75fv\") pod \"33482717-70e7-430c-b38e-25d9cfaac08b\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.686876 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-sb\") pod \"33482717-70e7-430c-b38e-25d9cfaac08b\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.686904 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-openstack-edpm-ipam\") pod \"33482717-70e7-430c-b38e-25d9cfaac08b\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.686947 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-config\") pod \"33482717-70e7-430c-b38e-25d9cfaac08b\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.687026 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-nb\") pod \"33482717-70e7-430c-b38e-25d9cfaac08b\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.687078 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-swift-storage-0\") pod \"33482717-70e7-430c-b38e-25d9cfaac08b\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.687100 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-svc\") pod \"33482717-70e7-430c-b38e-25d9cfaac08b\" (UID: \"33482717-70e7-430c-b38e-25d9cfaac08b\") " Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.695052 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33482717-70e7-430c-b38e-25d9cfaac08b-kube-api-access-d75fv" (OuterVolumeSpecName: "kube-api-access-d75fv") pod 
"33482717-70e7-430c-b38e-25d9cfaac08b" (UID: "33482717-70e7-430c-b38e-25d9cfaac08b"). InnerVolumeSpecName "kube-api-access-d75fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.733912 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "33482717-70e7-430c-b38e-25d9cfaac08b" (UID: "33482717-70e7-430c-b38e-25d9cfaac08b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.739018 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33482717-70e7-430c-b38e-25d9cfaac08b" (UID: "33482717-70e7-430c-b38e-25d9cfaac08b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.749164 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-config" (OuterVolumeSpecName: "config") pod "33482717-70e7-430c-b38e-25d9cfaac08b" (UID: "33482717-70e7-430c-b38e-25d9cfaac08b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.749771 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "33482717-70e7-430c-b38e-25d9cfaac08b" (UID: "33482717-70e7-430c-b38e-25d9cfaac08b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.753882 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33482717-70e7-430c-b38e-25d9cfaac08b" (UID: "33482717-70e7-430c-b38e-25d9cfaac08b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.757692 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33482717-70e7-430c-b38e-25d9cfaac08b" (UID: "33482717-70e7-430c-b38e-25d9cfaac08b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.790773 4520 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.790799 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.790809 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.790819 4520 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.790827 4520 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.790836 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d75fv\" (UniqueName: \"kubernetes.io/projected/33482717-70e7-430c-b38e-25d9cfaac08b-kube-api-access-d75fv\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.790844 4520 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33482717-70e7-430c-b38e-25d9cfaac08b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.949765 4520 generic.go:334] "Generic (PLEG): container finished" podID="33482717-70e7-430c-b38e-25d9cfaac08b" containerID="7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace" exitCode=0 Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.949817 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" event={"ID":"33482717-70e7-430c-b38e-25d9cfaac08b","Type":"ContainerDied","Data":"7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace"} Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.949840 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.949858 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b775f657-5j6x5" event={"ID":"33482717-70e7-430c-b38e-25d9cfaac08b","Type":"ContainerDied","Data":"062b41fc406d24feb122f6ad5c18af27766adff4569c93d9c781e230162e0f9d"} Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.949882 4520 scope.go:117] "RemoveContainer" containerID="7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.968420 4520 scope.go:117] "RemoveContainer" containerID="df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.993361 4520 scope.go:117] "RemoveContainer" containerID="7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace" Jan 30 07:05:14 crc kubenswrapper[4520]: E0130 07:05:14.993790 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace\": container with ID starting with 7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace not found: ID does not exist" containerID="7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.993837 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace"} err="failed to get container status \"7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace\": rpc error: code = NotFound desc = could not find container \"7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace\": container with ID starting with 7d5f4059888a39bdff43a1637c8ab8b18dfa83831f605d0b36c32416ee8c7ace not found: ID does not exist" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.993866 4520 scope.go:117] "RemoveContainer" containerID="df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73" Jan 30 07:05:14 crc kubenswrapper[4520]: E0130 07:05:14.994287 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73\": container with ID starting with df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73 not found: ID does not exist" containerID="df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.994312 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73"} err="failed to get container status \"df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73\": rpc error: code = NotFound desc = could not find container \"df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73\": container with ID starting with df4f5bf4dcaab6e7195db0d2eb26e05573320bf82976e0554317cfcfc986bc73 not found: ID does not exist" Jan 30 07:05:14 crc kubenswrapper[4520]: I0130 07:05:14.997550 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b775f657-5j6x5"] Jan 30 07:05:15 crc kubenswrapper[4520]: I0130 07:05:15.004772 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66b775f657-5j6x5"] Jan 30 
07:05:16 crc kubenswrapper[4520]: I0130 07:05:16.696631 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33482717-70e7-430c-b38e-25d9cfaac08b" path="/var/lib/kubelet/pods/33482717-70e7-430c-b38e-25d9cfaac08b/volumes" Jan 30 07:05:27 crc kubenswrapper[4520]: I0130 07:05:27.793949 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:05:27 crc kubenswrapper[4520]: I0130 07:05:27.794634 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:05:29 crc kubenswrapper[4520]: I0130 07:05:29.101861 4520 generic.go:334] "Generic (PLEG): container finished" podID="6de9b0a4-9862-4be2-8d01-aba25647bf18" containerID="94b431ee0406c629dc9aae44a9e153e06a794e6cb5bbffefec79eba4b86acd66" exitCode=0 Jan 30 07:05:29 crc kubenswrapper[4520]: I0130 07:05:29.102395 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6de9b0a4-9862-4be2-8d01-aba25647bf18","Type":"ContainerDied","Data":"94b431ee0406c629dc9aae44a9e153e06a794e6cb5bbffefec79eba4b86acd66"} Jan 30 07:05:29 crc kubenswrapper[4520]: I0130 07:05:29.105596 4520 generic.go:334] "Generic (PLEG): container finished" podID="59975299-c1c9-4ccd-a067-964e4644bc92" containerID="fa8432e4b7663a2c9b18efb43fb4ef8bd76809715123db77ad906131f581cf61" exitCode=0 Jan 30 07:05:29 crc kubenswrapper[4520]: I0130 07:05:29.106489 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59975299-c1c9-4ccd-a067-964e4644bc92","Type":"ContainerDied","Data":"fa8432e4b7663a2c9b18efb43fb4ef8bd76809715123db77ad906131f581cf61"} Jan 30 07:05:30 crc kubenswrapper[4520]: I0130 07:05:30.118952 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59975299-c1c9-4ccd-a067-964e4644bc92","Type":"ContainerStarted","Data":"29e7c143e6cfadc90cc3febb5b158d6b53a24146808026c346388eeb0f5f356e"} Jan 30 07:05:30 crc kubenswrapper[4520]: I0130 07:05:30.119646 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 07:05:30 crc kubenswrapper[4520]: I0130 07:05:30.122166 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6de9b0a4-9862-4be2-8d01-aba25647bf18","Type":"ContainerStarted","Data":"aa54bdf05597bc1216cf975cf5d7d98e9485f279bd8e2a50c83f08b5ac36949a"} Jan 30 07:05:30 crc kubenswrapper[4520]: I0130 07:05:30.122417 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:05:30 crc kubenswrapper[4520]: I0130 07:05:30.151783 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.151767714 podStartE2EDuration="36.151767714s" podCreationTimestamp="2026-01-30 07:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:05:30.14384645 +0000 UTC m=+1243.772198632" 
watchObservedRunningTime="2026-01-30 07:05:30.151767714 +0000 UTC m=+1243.780119895" Jan 30 07:05:30 crc kubenswrapper[4520]: I0130 07:05:30.166971 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.166942416 podStartE2EDuration="36.166942416s" podCreationTimestamp="2026-01-30 07:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:05:30.164862255 +0000 UTC m=+1243.793214436" watchObservedRunningTime="2026-01-30 07:05:30.166942416 +0000 UTC m=+1243.795294597" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.203086 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c"] Jan 30 07:05:32 crc kubenswrapper[4520]: E0130 07:05:32.203564 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24cc31d-16b3-4859-a413-dbb766b276e2" containerName="dnsmasq-dns" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.203579 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24cc31d-16b3-4859-a413-dbb766b276e2" containerName="dnsmasq-dns" Jan 30 07:05:32 crc kubenswrapper[4520]: E0130 07:05:32.203602 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33482717-70e7-430c-b38e-25d9cfaac08b" containerName="dnsmasq-dns" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.203608 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="33482717-70e7-430c-b38e-25d9cfaac08b" containerName="dnsmasq-dns" Jan 30 07:05:32 crc kubenswrapper[4520]: E0130 07:05:32.203624 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24cc31d-16b3-4859-a413-dbb766b276e2" containerName="init" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.203630 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24cc31d-16b3-4859-a413-dbb766b276e2" containerName="init" Jan 30 07:05:32 crc kubenswrapper[4520]: E0130 07:05:32.203651 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33482717-70e7-430c-b38e-25d9cfaac08b" containerName="init" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.203666 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="33482717-70e7-430c-b38e-25d9cfaac08b" containerName="init" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.203863 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24cc31d-16b3-4859-a413-dbb766b276e2" containerName="dnsmasq-dns" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.203876 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="33482717-70e7-430c-b38e-25d9cfaac08b" containerName="dnsmasq-dns" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.204702 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.209505 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.210452 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.213473 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.215289 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.242437 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c"] Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.386005 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvjxg\" (UniqueName: \"kubernetes.io/projected/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-kube-api-access-pvjxg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.386347 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.386537 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.386653 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.490078 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.490203 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.490263 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.490331 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvjxg\" (UniqueName: \"kubernetes.io/projected/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-kube-api-access-pvjxg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.498397 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.499164 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.505886 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.506270 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvjxg\" (UniqueName: \"kubernetes.io/projected/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-kube-api-access-pvjxg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:32 crc kubenswrapper[4520]: I0130 07:05:32.529145 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:33 crc kubenswrapper[4520]: I0130 07:05:33.301695 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c"] Jan 30 07:05:34 crc kubenswrapper[4520]: I0130 07:05:34.163393 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" event={"ID":"0f9c7c64-fbc4-4d01-95e8-36e6aa941610","Type":"ContainerStarted","Data":"870f5588a8f8e996ff93a6541f7fe2ce24909cb764238c0d5c5a15223e711bb8"} Jan 30 07:05:44 crc kubenswrapper[4520]: I0130 07:05:44.286237 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" event={"ID":"0f9c7c64-fbc4-4d01-95e8-36e6aa941610","Type":"ContainerStarted","Data":"97ea0ea0b2b007a91eee5ea8a87f5e9a1091c0d424c8df5ca7034afe33dddec9"} Jan 30 07:05:44 crc kubenswrapper[4520]: I0130 07:05:44.306034 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" podStartSLOduration=1.8641182920000001 podStartE2EDuration="12.306017034s" podCreationTimestamp="2026-01-30 07:05:32 +0000 UTC" firstStartedPulling="2026-01-30 07:05:33.304965265 +0000 UTC m=+1246.933317446" lastFinishedPulling="2026-01-30 07:05:43.746864007 +0000 UTC m=+1257.375216188" observedRunningTime="2026-01-30 07:05:44.305290677 +0000 UTC m=+1257.933642859" watchObservedRunningTime="2026-01-30 07:05:44.306017034 +0000 UTC m=+1257.934369215" Jan 30 07:05:45 crc kubenswrapper[4520]: I0130 07:05:45.219695 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 07:05:45 crc kubenswrapper[4520]: I0130 07:05:45.253747 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 07:05:54 crc kubenswrapper[4520]: I0130 07:05:54.367172 4520 scope.go:117] "RemoveContainer" containerID="9d869e019e8f61d5527214121ef9e7ee2f1b3c059151bd1981be27b48d7dd44f" Jan 30 07:05:54 crc kubenswrapper[4520]: I0130 07:05:54.403219 4520 scope.go:117] "RemoveContainer" containerID="6f894844b125b048d0fffa56897834c5a61f59e0f74641ff50fef0a6621f13f3" Jan 30 07:05:54 crc kubenswrapper[4520]: I0130 07:05:54.445441 4520 scope.go:117] "RemoveContainer" containerID="65a23ad6f1cdb4d68c5c822bfed69a5e5a265bf7c3e949038e74887365524361" Jan 30 07:05:55 crc kubenswrapper[4520]: I0130 07:05:55.393920 4520 generic.go:334] "Generic (PLEG): container finished" podID="0f9c7c64-fbc4-4d01-95e8-36e6aa941610" containerID="97ea0ea0b2b007a91eee5ea8a87f5e9a1091c0d424c8df5ca7034afe33dddec9" exitCode=0 Jan 30 07:05:55 crc kubenswrapper[4520]: I0130 07:05:55.394012 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" event={"ID":"0f9c7c64-fbc4-4d01-95e8-36e6aa941610","Type":"ContainerDied","Data":"97ea0ea0b2b007a91eee5ea8a87f5e9a1091c0d424c8df5ca7034afe33dddec9"} Jan 30 07:05:56 crc kubenswrapper[4520]: I0130 07:05:56.887484 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.092166 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-repo-setup-combined-ca-bundle\") pod \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.092227 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-ssh-key-openstack-edpm-ipam\") pod \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.092398 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-inventory\") pod \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.092447 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvjxg\" (UniqueName: \"kubernetes.io/projected/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-kube-api-access-pvjxg\") pod \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\" (UID: \"0f9c7c64-fbc4-4d01-95e8-36e6aa941610\") " Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.113431 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-kube-api-access-pvjxg" (OuterVolumeSpecName: "kube-api-access-pvjxg") pod "0f9c7c64-fbc4-4d01-95e8-36e6aa941610" (UID: "0f9c7c64-fbc4-4d01-95e8-36e6aa941610"). InnerVolumeSpecName "kube-api-access-pvjxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.119740 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0f9c7c64-fbc4-4d01-95e8-36e6aa941610" (UID: "0f9c7c64-fbc4-4d01-95e8-36e6aa941610"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.130760 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f9c7c64-fbc4-4d01-95e8-36e6aa941610" (UID: "0f9c7c64-fbc4-4d01-95e8-36e6aa941610"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.142877 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-inventory" (OuterVolumeSpecName: "inventory") pod "0f9c7c64-fbc4-4d01-95e8-36e6aa941610" (UID: "0f9c7c64-fbc4-4d01-95e8-36e6aa941610"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.197563 4520 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.197716 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.197801 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.197881 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvjxg\" (UniqueName: \"kubernetes.io/projected/0f9c7c64-fbc4-4d01-95e8-36e6aa941610-kube-api-access-pvjxg\") on node \"crc\" DevicePath \"\"" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.415991 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" event={"ID":"0f9c7c64-fbc4-4d01-95e8-36e6aa941610","Type":"ContainerDied","Data":"870f5588a8f8e996ff93a6541f7fe2ce24909cb764238c0d5c5a15223e711bb8"} Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.416044 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870f5588a8f8e996ff93a6541f7fe2ce24909cb764238c0d5c5a15223e711bb8" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.416429 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rdr2c" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.483079 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v"] Jan 30 07:05:57 crc kubenswrapper[4520]: E0130 07:05:57.483501 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9c7c64-fbc4-4d01-95e8-36e6aa941610" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.483539 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9c7c64-fbc4-4d01-95e8-36e6aa941610" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.483775 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9c7c64-fbc4-4d01-95e8-36e6aa941610" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.484398 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.486165 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.486184 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.488469 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.489982 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.500348 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v"] Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.608946 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.609286 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.609397 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8p8\" (UniqueName: \"kubernetes.io/projected/ace68fe8-2440-4985-bf81-e6b44a24c55a-kube-api-access-tz8p8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.710943 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.711000 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.711037 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz8p8\" (UniqueName: \"kubernetes.io/projected/ace68fe8-2440-4985-bf81-e6b44a24c55a-kube-api-access-tz8p8\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.716027 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.720955 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.725092 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz8p8\" (UniqueName: \"kubernetes.io/projected/ace68fe8-2440-4985-bf81-e6b44a24c55a-kube-api-access-tz8p8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mxl7v\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.793816 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.793948 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.794071 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.794726 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3900247724d53d5b578bd24f8556d2a19d19cf1623714d7b59ae28dee17ff16f"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.794859 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://3900247724d53d5b578bd24f8556d2a19d19cf1623714d7b59ae28dee17ff16f" gracePeriod=600 Jan 30 07:05:57 crc kubenswrapper[4520]: I0130 07:05:57.797173 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:05:58 crc kubenswrapper[4520]: E0130 07:05:58.020318 4520 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5f51275_c0b1_4467_bf4a_ef848e3521df.slice/crio-3900247724d53d5b578bd24f8556d2a19d19cf1623714d7b59ae28dee17ff16f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5f51275_c0b1_4467_bf4a_ef848e3521df.slice/crio-conmon-3900247724d53d5b578bd24f8556d2a19d19cf1623714d7b59ae28dee17ff16f.scope\": RecentStats: unable to find data in memory cache]" Jan 30 07:05:58 crc kubenswrapper[4520]: I0130 07:05:58.366784 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v"] Jan 30 07:05:58 crc kubenswrapper[4520]: I0130 07:05:58.429969 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" event={"ID":"ace68fe8-2440-4985-bf81-e6b44a24c55a","Type":"ContainerStarted","Data":"c8c6ab308365fb604413d8daa7b0260a5a54713f866b10f060f97d3a4a72a950"} Jan 30 07:05:58 crc kubenswrapper[4520]: I0130 07:05:58.433409 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="3900247724d53d5b578bd24f8556d2a19d19cf1623714d7b59ae28dee17ff16f" exitCode=0 Jan 30 07:05:58 crc kubenswrapper[4520]: I0130 07:05:58.433495 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"3900247724d53d5b578bd24f8556d2a19d19cf1623714d7b59ae28dee17ff16f"} Jan 30 07:05:58 crc kubenswrapper[4520]: I0130 07:05:58.433576 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"b00f4ab612613c1cf7c10de4b942ca02f5ce93773b4911ac63542d5a5740888c"} Jan 30 07:05:58 crc kubenswrapper[4520]: I0130 07:05:58.433601 4520 scope.go:117] "RemoveContainer" containerID="00188edbc7a901128a316b70d44312dd0aa78297ee86dd9a3630c6ec14392173" Jan 30 07:05:59 crc kubenswrapper[4520]: I0130 07:05:59.446098 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" event={"ID":"ace68fe8-2440-4985-bf81-e6b44a24c55a","Type":"ContainerStarted","Data":"1aec7807ff45da2be26add586785afe112e3b4e4a51d3154729a6b10a120cf70"} Jan 30 07:05:59 crc kubenswrapper[4520]: I0130 07:05:59.473884 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" podStartSLOduration=1.9705145160000002 podStartE2EDuration="2.473868416s" podCreationTimestamp="2026-01-30 07:05:57 +0000 UTC" firstStartedPulling="2026-01-30 07:05:58.375694976 +0000 UTC m=+1272.004047157" lastFinishedPulling="2026-01-30 07:05:58.879048876 +0000 UTC m=+1272.507401057" observedRunningTime="2026-01-30 07:05:59.46340447 +0000 UTC m=+1273.091756651" watchObservedRunningTime="2026-01-30 07:05:59.473868416 +0000 UTC m=+1273.102220596" Jan 30 07:06:01 crc kubenswrapper[4520]: I0130 07:06:01.471400 4520 generic.go:334] "Generic (PLEG): container finished" podID="ace68fe8-2440-4985-bf81-e6b44a24c55a" 
containerID="1aec7807ff45da2be26add586785afe112e3b4e4a51d3154729a6b10a120cf70" exitCode=0 Jan 30 07:06:01 crc kubenswrapper[4520]: I0130 07:06:01.471480 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" event={"ID":"ace68fe8-2440-4985-bf81-e6b44a24c55a","Type":"ContainerDied","Data":"1aec7807ff45da2be26add586785afe112e3b4e4a51d3154729a6b10a120cf70"} Jan 30 07:06:02 crc kubenswrapper[4520]: I0130 07:06:02.871802 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" Jan 30 07:06:02 crc kubenswrapper[4520]: I0130 07:06:02.953565 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-ssh-key-openstack-edpm-ipam\") pod \"ace68fe8-2440-4985-bf81-e6b44a24c55a\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " Jan 30 07:06:02 crc kubenswrapper[4520]: I0130 07:06:02.954066 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-inventory\") pod \"ace68fe8-2440-4985-bf81-e6b44a24c55a\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " Jan 30 07:06:02 crc kubenswrapper[4520]: I0130 07:06:02.954182 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz8p8\" (UniqueName: \"kubernetes.io/projected/ace68fe8-2440-4985-bf81-e6b44a24c55a-kube-api-access-tz8p8\") pod \"ace68fe8-2440-4985-bf81-e6b44a24c55a\" (UID: \"ace68fe8-2440-4985-bf81-e6b44a24c55a\") " Jan 30 07:06:02 crc kubenswrapper[4520]: I0130 07:06:02.960103 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace68fe8-2440-4985-bf81-e6b44a24c55a-kube-api-access-tz8p8" (OuterVolumeSpecName: "kube-api-access-tz8p8") pod "ace68fe8-2440-4985-bf81-e6b44a24c55a" (UID: "ace68fe8-2440-4985-bf81-e6b44a24c55a"). InnerVolumeSpecName "kube-api-access-tz8p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:06:02 crc kubenswrapper[4520]: I0130 07:06:02.982235 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ace68fe8-2440-4985-bf81-e6b44a24c55a" (UID: "ace68fe8-2440-4985-bf81-e6b44a24c55a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:06:02 crc kubenswrapper[4520]: I0130 07:06:02.982709 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-inventory" (OuterVolumeSpecName: "inventory") pod "ace68fe8-2440-4985-bf81-e6b44a24c55a" (UID: "ace68fe8-2440-4985-bf81-e6b44a24c55a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.057136 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.057176 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace68fe8-2440-4985-bf81-e6b44a24c55a-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.057189 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz8p8\" (UniqueName: \"kubernetes.io/projected/ace68fe8-2440-4985-bf81-e6b44a24c55a-kube-api-access-tz8p8\") on node \"crc\" DevicePath \"\""
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.496133 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v" event={"ID":"ace68fe8-2440-4985-bf81-e6b44a24c55a","Type":"ContainerDied","Data":"c8c6ab308365fb604413d8daa7b0260a5a54713f866b10f060f97d3a4a72a950"}
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.496478 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8c6ab308365fb604413d8daa7b0260a5a54713f866b10f060f97d3a4a72a950"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.496202 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mxl7v"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.553812 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"]
Jan 30 07:06:03 crc kubenswrapper[4520]: E0130 07:06:03.554238 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace68fe8-2440-4985-bf81-e6b44a24c55a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.554257 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace68fe8-2440-4985-bf81-e6b44a24c55a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.554422 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace68fe8-2440-4985-bf81-e6b44a24c55a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.555071 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.558806 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.559002 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.559166 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.565879 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.565930 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqdn4\" (UniqueName: \"kubernetes.io/projected/e5f032fc-b82e-4882-ac68-186ccb98af34-kube-api-access-rqdn4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.565973 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.566094 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.567351 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.572556 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"]
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.667547 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.667622 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.667680 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqdn4\" (UniqueName: \"kubernetes.io/projected/e5f032fc-b82e-4882-ac68-186ccb98af34-kube-api-access-rqdn4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.667725 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.674071 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.674231 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.678969 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Jan 30 07:06:03 crc kubenswrapper[4520]: I0130 07:06:03.690238 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqdn4\" (UniqueName: \"kubernetes.io/projected/e5f032fc-b82e-4882-ac68-186ccb98af34-kube-api-access-rqdn4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58" Jan 30 07:06:04 crc kubenswrapper[4520]: I0130 07:06:04.405680 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58"] Jan 30 07:06:04 crc kubenswrapper[4520]: I0130 07:06:04.508076 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58" event={"ID":"e5f032fc-b82e-4882-ac68-186ccb98af34","Type":"ContainerStarted","Data":"adeb76954f337772d62cb3c7716f7d672763868cd3362743c7b284cb6d1df6e6"} Jan 30 07:06:05 crc kubenswrapper[4520]: I0130 07:06:05.520740 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58" event={"ID":"e5f032fc-b82e-4882-ac68-186ccb98af34","Type":"ContainerStarted","Data":"111dc0269e6b0b6273a3c1167260e33770718dacd9d8cf5d9bdf95d76fa0eff2"} Jan 30 07:06:05 crc kubenswrapper[4520]: I0130 07:06:05.546240 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58" podStartSLOduration=2.069920854 podStartE2EDuration="2.546219001s" podCreationTimestamp="2026-01-30 07:06:03 +0000 UTC" firstStartedPulling="2026-01-30 07:06:04.40510321 +0000 UTC m=+1278.033455391" lastFinishedPulling="2026-01-30 07:06:04.881401367 +0000 UTC m=+1278.509753538" observedRunningTime="2026-01-30 07:06:05.532739347 +0000 UTC m=+1279.161091527" watchObservedRunningTime="2026-01-30 07:06:05.546219001 +0000 UTC m=+1279.174571182" Jan 30 07:06:54 crc kubenswrapper[4520]: I0130 07:06:54.624422 4520 scope.go:117] "RemoveContainer" containerID="ec8500b8477fefb4a8c65e86ad568dc4618d89be58fa29cf0eda07e8632c2b32" Jan 30 07:08:27 crc kubenswrapper[4520]: I0130 07:08:27.793861 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:08:27 crc kubenswrapper[4520]: I0130 07:08:27.794401 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:08:57 crc kubenswrapper[4520]: I0130 07:08:57.793386 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:08:57 crc kubenswrapper[4520]: I0130 07:08:57.793895 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:08:59 crc kubenswrapper[4520]: I0130 07:08:59.049604 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-744gm"] Jan 30 07:08:59 crc kubenswrapper[4520]: I0130 07:08:59.055428 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-38e4-account-create-update-vrcjp"] Jan 30 07:08:59 crc kubenswrapper[4520]: I0130 07:08:59.063992 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4575-account-create-update-pxcsl"] Jan 30 07:08:59 crc kubenswrapper[4520]: I0130 07:08:59.069922 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-qwxpw"] Jan 30 07:08:59 crc kubenswrapper[4520]: I0130 07:08:59.075606 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-38e4-account-create-update-vrcjp"] Jan 30 07:08:59 crc kubenswrapper[4520]: I0130 07:08:59.080641 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-744gm"] Jan 30 07:08:59 crc kubenswrapper[4520]: I0130 07:08:59.086321 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4575-account-create-update-pxcsl"] Jan 30 07:08:59 crc kubenswrapper[4520]: I0130 07:08:59.091376 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-qwxpw"] Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.028657 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2f92-account-create-update-4mpvt"] Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.034260 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bh5dv"] Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.040563 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2f92-account-create-update-4mpvt"] Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.045579 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bh5dv"] Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.694784 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa" path="/var/lib/kubelet/pods/638e5bb8-4a2a-42a5-ab4c-fd75e93b8efa/volumes" Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.696914 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e50498-b61c-48eb-bd9b-002ad02fa6a0" path="/var/lib/kubelet/pods/66e50498-b61c-48eb-bd9b-002ad02fa6a0/volumes" Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.698480 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fa59f0-b4fd-472f-a612-b79fc97fec36" path="/var/lib/kubelet/pods/87fa59f0-b4fd-472f-a612-b79fc97fec36/volumes" Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.700632 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96657e38-5386-49e4-9ea3-b12a72c31fdf" path="/var/lib/kubelet/pods/96657e38-5386-49e4-9ea3-b12a72c31fdf/volumes" Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.702396 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e19c8d9-7b99-4827-9b6b-c786a3600c46" path="/var/lib/kubelet/pods/9e19c8d9-7b99-4827-9b6b-c786a3600c46/volumes" Jan 30 07:09:00 crc kubenswrapper[4520]: I0130 07:09:00.704383 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74d5704-4893-46ce-912f-20805c99608c" path="/var/lib/kubelet/pods/c74d5704-4893-46ce-912f-20805c99608c/volumes" Jan 30 07:09:05 crc kubenswrapper[4520]: I0130 07:09:05.112826 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f032fc-b82e-4882-ac68-186ccb98af34" containerID="111dc0269e6b0b6273a3c1167260e33770718dacd9d8cf5d9bdf95d76fa0eff2" exitCode=0 Jan 30 07:09:05 crc kubenswrapper[4520]: I0130 
07:09:05.112901 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58" event={"ID":"e5f032fc-b82e-4882-ac68-186ccb98af34","Type":"ContainerDied","Data":"111dc0269e6b0b6273a3c1167260e33770718dacd9d8cf5d9bdf95d76fa0eff2"} Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.459974 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58" Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.658800 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqdn4\" (UniqueName: \"kubernetes.io/projected/e5f032fc-b82e-4882-ac68-186ccb98af34-kube-api-access-rqdn4\") pod \"e5f032fc-b82e-4882-ac68-186ccb98af34\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.659321 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-inventory\") pod \"e5f032fc-b82e-4882-ac68-186ccb98af34\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.659400 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-ssh-key-openstack-edpm-ipam\") pod \"e5f032fc-b82e-4882-ac68-186ccb98af34\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.659534 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-bootstrap-combined-ca-bundle\") pod \"e5f032fc-b82e-4882-ac68-186ccb98af34\" (UID: \"e5f032fc-b82e-4882-ac68-186ccb98af34\") " Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.677545 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e5f032fc-b82e-4882-ac68-186ccb98af34" (UID: "e5f032fc-b82e-4882-ac68-186ccb98af34"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.677576 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f032fc-b82e-4882-ac68-186ccb98af34-kube-api-access-rqdn4" (OuterVolumeSpecName: "kube-api-access-rqdn4") pod "e5f032fc-b82e-4882-ac68-186ccb98af34" (UID: "e5f032fc-b82e-4882-ac68-186ccb98af34"). InnerVolumeSpecName "kube-api-access-rqdn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.682420 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-inventory" (OuterVolumeSpecName: "inventory") pod "e5f032fc-b82e-4882-ac68-186ccb98af34" (UID: "e5f032fc-b82e-4882-ac68-186ccb98af34"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.682764 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e5f032fc-b82e-4882-ac68-186ccb98af34" (UID: "e5f032fc-b82e-4882-ac68-186ccb98af34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.762636 4520 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.763422 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqdn4\" (UniqueName: \"kubernetes.io/projected/e5f032fc-b82e-4882-ac68-186ccb98af34-kube-api-access-rqdn4\") on node \"crc\" DevicePath \"\"" Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.763581 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:09:06 crc kubenswrapper[4520]: I0130 07:09:06.763783 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5f032fc-b82e-4882-ac68-186ccb98af34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.130567 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58" event={"ID":"e5f032fc-b82e-4882-ac68-186ccb98af34","Type":"ContainerDied","Data":"adeb76954f337772d62cb3c7716f7d672763868cd3362743c7b284cb6d1df6e6"} Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.130620 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adeb76954f337772d62cb3c7716f7d672763868cd3362743c7b284cb6d1df6e6" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.130628 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d8h58" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.203949 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw"] Jan 30 07:09:07 crc kubenswrapper[4520]: E0130 07:09:07.204335 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f032fc-b82e-4882-ac68-186ccb98af34" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.204352 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f032fc-b82e-4882-ac68-186ccb98af34" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.204526 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f032fc-b82e-4882-ac68-186ccb98af34" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.205127 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.207178 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.207715 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.208217 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.208483 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.219808 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw"] Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.274350 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.274471 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6fd\" (UniqueName: \"kubernetes.io/projected/ce5ace07-4153-4a1d-b920-18f0d97db7ac-kube-api-access-bf6fd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.274531 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.376172 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.376233 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6fd\" (UniqueName: \"kubernetes.io/projected/ce5ace07-4153-4a1d-b920-18f0d97db7ac-kube-api-access-bf6fd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.376264 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.385115 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.392032 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.394774 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6fd\" (UniqueName: \"kubernetes.io/projected/ce5ace07-4153-4a1d-b920-18f0d97db7ac-kube-api-access-bf6fd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.520857 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.988664 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw"] Jan 30 07:09:07 crc kubenswrapper[4520]: I0130 07:09:07.997159 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 07:09:08 crc kubenswrapper[4520]: I0130 07:09:08.142668 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" event={"ID":"ce5ace07-4153-4a1d-b920-18f0d97db7ac","Type":"ContainerStarted","Data":"ee990f2c1bb55650d57cb2f7bca6c278f539b7045365bce30ced08128536641e"} Jan 30 07:09:09 crc kubenswrapper[4520]: I0130 07:09:09.155125 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" event={"ID":"ce5ace07-4153-4a1d-b920-18f0d97db7ac","Type":"ContainerStarted","Data":"ebdb3c7ffd749b5f67dd266c4ea9f01aab8b7c3cffd6cfaf5687ac4fa10411f5"} Jan 30 07:09:09 crc kubenswrapper[4520]: I0130 07:09:09.193887 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" podStartSLOduration=1.721995691 podStartE2EDuration="2.193873247s" podCreationTimestamp="2026-01-30 07:09:07 +0000 UTC" firstStartedPulling="2026-01-30 07:09:07.996937939 +0000 UTC m=+1461.625290120" lastFinishedPulling="2026-01-30 07:09:08.468815495 +0000 UTC m=+1462.097167676" observedRunningTime="2026-01-30 07:09:09.191117827 +0000 UTC m=+1462.819470008" watchObservedRunningTime="2026-01-30 07:09:09.193873247 +0000 UTC m=+1462.822225429" Jan 30 07:09:13 crc kubenswrapper[4520]: I0130 
Jan 30 07:09:13 crc kubenswrapper[4520]: I0130 07:09:13.024069 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rfg99"]
Jan 30 07:09:13 crc kubenswrapper[4520]: I0130 07:09:13.029862 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rfg99"]
Jan 30 07:09:14 crc kubenswrapper[4520]: I0130 07:09:14.696753 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849821f7-8f89-49da-b649-5cd380b989a7" path="/var/lib/kubelet/pods/849821f7-8f89-49da-b649-5cd380b989a7/volumes"
Jan 30 07:09:27 crc kubenswrapper[4520]: I0130 07:09:27.794111 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 07:09:27 crc kubenswrapper[4520]: I0130 07:09:27.794670 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 07:09:27 crc kubenswrapper[4520]: I0130 07:09:27.794724 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt"
Jan 30 07:09:27 crc kubenswrapper[4520]: I0130 07:09:27.795372 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b00f4ab612613c1cf7c10de4b942ca02f5ce93773b4911ac63542d5a5740888c"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 07:09:27 crc kubenswrapper[4520]: I0130 07:09:27.795421 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://b00f4ab612613c1cf7c10de4b942ca02f5ce93773b4911ac63542d5a5740888c" gracePeriod=600
Jan 30 07:09:28 crc kubenswrapper[4520]: I0130 07:09:28.300115 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="b00f4ab612613c1cf7c10de4b942ca02f5ce93773b4911ac63542d5a5740888c" exitCode=0
Jan 30 07:09:28 crc kubenswrapper[4520]: I0130 07:09:28.300168 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"b00f4ab612613c1cf7c10de4b942ca02f5ce93773b4911ac63542d5a5740888c"}
Jan 30 07:09:28 crc kubenswrapper[4520]: I0130 07:09:28.300325 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7"}
Jan 30 07:09:28 crc kubenswrapper[4520]: I0130 07:09:28.300351 4520 scope.go:117] "RemoveContainer" containerID="3900247724d53d5b578bd24f8556d2a19d19cf1623714d7b59ae28dee17ff16f"
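The records above trace a complete liveness-failure cycle for machine-config-daemon: the HTTP probe against 127.0.0.1:8798/health is refused, the kubelet marks the container unhealthy, kills it with the configured grace period, and PLEG then reports the old container dying and a replacement starting. A minimal sketch of an HTTP check like the one failing here; the endpoint is taken from the log, and the success range (2xx/3xx) follows kubelet's convention for HTTP probes:

    // Minimal HTTP liveness check sketch, not kubelet's actual prober code.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func probe(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "connect: connection refused", as in the records above
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := probe("http://127.0.0.1:8798/health"); err != nil {
    		fmt.Println("Probe failed:", err) // after failureThreshold misses, kubelet restarts the container
    	}
    }

Note the exitCode=0 on the killed container: the process shut down cleanly within the 600-second grace period rather than being force-killed.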
pods=["openstack/glance-db-sync-fglmz"] Jan 30 07:09:30 crc kubenswrapper[4520]: I0130 07:09:30.053841 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-fglmz"] Jan 30 07:09:30 crc kubenswrapper[4520]: I0130 07:09:30.694192 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd50154-e55a-4dae-ac2d-3528b94ff9f6" path="/var/lib/kubelet/pods/ddd50154-e55a-4dae-ac2d-3528b94ff9f6/volumes" Jan 30 07:09:44 crc kubenswrapper[4520]: I0130 07:09:44.030447 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-1396-account-create-update-qtf79"] Jan 30 07:09:44 crc kubenswrapper[4520]: I0130 07:09:44.034398 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-1396-account-create-update-qtf79"] Jan 30 07:09:44 crc kubenswrapper[4520]: I0130 07:09:44.698210 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a57446-d8a7-45ce-ac2a-1704429731a7" path="/var/lib/kubelet/pods/88a57446-d8a7-45ce-ac2a-1704429731a7/volumes" Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.035167 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6cc0-account-create-update-bj5pr"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.040738 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ac5a-account-create-update-xh94n"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.045936 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b11b-account-create-update-js482"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.051298 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ac5a-account-create-update-xh94n"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.056435 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-kljc9"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.064609 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b11b-account-create-update-js482"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.069393 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6cc0-account-create-update-bj5pr"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.074121 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-p84rh"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.078633 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-kljc9"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.083159 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qccxn"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.087711 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qccxn"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.092196 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-k9p6j"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.096780 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-p84rh"] Jan 30 07:09:45 crc kubenswrapper[4520]: I0130 07:09:45.102010 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-k9p6j"] Jan 30 07:09:46 crc kubenswrapper[4520]: I0130 07:09:46.697141 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de5d64c-937a-41c9-b68c-8832b18aabf1" 
path="/var/lib/kubelet/pods/1de5d64c-937a-41c9-b68c-8832b18aabf1/volumes" Jan 30 07:09:46 crc kubenswrapper[4520]: I0130 07:09:46.699248 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae04536-592c-4d7c-bbeb-8ef1df3370a7" path="/var/lib/kubelet/pods/7ae04536-592c-4d7c-bbeb-8ef1df3370a7/volumes" Jan 30 07:09:46 crc kubenswrapper[4520]: I0130 07:09:46.700433 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="839c4efd-2ebb-43d0-9bdb-8dcd83737a8a" path="/var/lib/kubelet/pods/839c4efd-2ebb-43d0-9bdb-8dcd83737a8a/volumes" Jan 30 07:09:46 crc kubenswrapper[4520]: I0130 07:09:46.702856 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29e2dd5-25c0-4c49-8d73-30db73b5bc36" path="/var/lib/kubelet/pods/c29e2dd5-25c0-4c49-8d73-30db73b5bc36/volumes" Jan 30 07:09:46 crc kubenswrapper[4520]: I0130 07:09:46.705380 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4564e1a-9135-4edd-842b-e4954834ae5d" path="/var/lib/kubelet/pods/d4564e1a-9135-4edd-842b-e4954834ae5d/volumes" Jan 30 07:09:46 crc kubenswrapper[4520]: I0130 07:09:46.707111 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4d1798-73e8-4315-87d5-e638d87abfd5" path="/var/lib/kubelet/pods/db4d1798-73e8-4315-87d5-e638d87abfd5/volumes" Jan 30 07:09:46 crc kubenswrapper[4520]: I0130 07:09:46.708888 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925" path="/var/lib/kubelet/pods/fdd5fd9c-aeca-4fcd-9efa-f0d5e470b925/volumes" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.037437 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qhxqf"] Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.044013 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qhxqf"] Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.695834 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f732baff-71b8-4edc-8ec9-ebf30a096f74" path="/var/lib/kubelet/pods/f732baff-71b8-4edc-8ec9-ebf30a096f74/volumes" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.731955 4520 scope.go:117] "RemoveContainer" containerID="fc868ea9c3011014ed19a09e1b9da359fb2a8407c9da21c4e31470935afbf713" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.754103 4520 scope.go:117] "RemoveContainer" containerID="372522daaefb177fec3e4a8baa548fefb8b78690aaf6fe6803803b69586eb98e" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.791192 4520 scope.go:117] "RemoveContainer" containerID="7fbb3867084f1a5cd03fa0e5aaa627b1ebcfe0697d8ea7a9c13b5ebe902a303f" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.822480 4520 scope.go:117] "RemoveContainer" containerID="758237f69ab238cc777b2a0d458c5a1e73ee0b2b2600269700316f61c3cd66b4" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.853320 4520 scope.go:117] "RemoveContainer" containerID="b06002c5d1465d1a40ff40fa02c602b02c1221af7c39b605a28729504bfb6dbd" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.883502 4520 scope.go:117] "RemoveContainer" containerID="379719d2c1d5df021ee12c4afebd31b8461549ce4158e7cc4df54df507a95489" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.914363 4520 scope.go:117] "RemoveContainer" containerID="3c3de59591f4d8d268f7172f764b47db2d324b4622902239d22e6722b65411b5" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.931880 4520 scope.go:117] "RemoveContainer" 
containerID="28b6ac33e87ee15d265f0f8151e19a85c43f78fcd6194bdcd67b8c5c90ea3bf1" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.949189 4520 scope.go:117] "RemoveContainer" containerID="6f36f78133aa3919d8172980469d2aaaa9ff7cdb5084aa74fbb93265684545b3" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.963982 4520 scope.go:117] "RemoveContainer" containerID="b2bb100ba44a7fbaeb33c0fa46c1c5aa4d4088d24572ea65caa31dec2a0d9076" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.980554 4520 scope.go:117] "RemoveContainer" containerID="b1edc1d10fcef42e5c5803ab0bd3d7da3ca1bf726d33280b07680bf4d499eb6a" Jan 30 07:09:54 crc kubenswrapper[4520]: I0130 07:09:54.996797 4520 scope.go:117] "RemoveContainer" containerID="2bbecffc128a1431bd85a48a173bb01c6bbf667018c7e9cc5e6bb9501c39e6e5" Jan 30 07:09:55 crc kubenswrapper[4520]: I0130 07:09:55.012011 4520 scope.go:117] "RemoveContainer" containerID="5d3f08c4faa574c6d0b2cb26aec43dcf834a75527dd4431b1c1c815ed5cf3015" Jan 30 07:09:55 crc kubenswrapper[4520]: I0130 07:09:55.032918 4520 scope.go:117] "RemoveContainer" containerID="289ebc1ab0b8a8c94ac6c572c1802c8b9640d3a498143f42e8c79ed0b23ae451" Jan 30 07:09:55 crc kubenswrapper[4520]: I0130 07:09:55.048909 4520 scope.go:117] "RemoveContainer" containerID="d827e9732e87b1e4d40924886162fe5254321ebf7c4533d3b8e36daf3df66ba0" Jan 30 07:09:55 crc kubenswrapper[4520]: I0130 07:09:55.062275 4520 scope.go:117] "RemoveContainer" containerID="6fd5e39d8888aa8e32e72b902823ecfec2d8a4862fdfb54ce350acbed6441994" Jan 30 07:09:55 crc kubenswrapper[4520]: I0130 07:09:55.078210 4520 scope.go:117] "RemoveContainer" containerID="81ca8543fbaf000cc130b3729de0fed46bc96673b7d5480676bf7004508096fe" Jan 30 07:10:27 crc kubenswrapper[4520]: I0130 07:10:27.032564 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qgzqb"] Jan 30 07:10:27 crc kubenswrapper[4520]: I0130 07:10:27.039474 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qgzqb"] Jan 30 07:10:28 crc kubenswrapper[4520]: I0130 07:10:28.694674 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46098fe-52c7-4a41-9a00-d156d5bfc4be" path="/var/lib/kubelet/pods/c46098fe-52c7-4a41-9a00-d156d5bfc4be/volumes" Jan 30 07:10:43 crc kubenswrapper[4520]: I0130 07:10:43.026542 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-t8smt"] Jan 30 07:10:43 crc kubenswrapper[4520]: I0130 07:10:43.032215 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-t8smt"] Jan 30 07:10:44 crc kubenswrapper[4520]: I0130 07:10:44.694196 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b1fa358-6b62-4cf6-a32c-89e98f169b42" path="/var/lib/kubelet/pods/0b1fa358-6b62-4cf6-a32c-89e98f169b42/volumes" Jan 30 07:10:50 crc kubenswrapper[4520]: I0130 07:10:50.029875 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zrdtq"] Jan 30 07:10:50 crc kubenswrapper[4520]: I0130 07:10:50.035020 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zrdtq"] Jan 30 07:10:50 crc kubenswrapper[4520]: I0130 07:10:50.694684 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df706708-e03c-4d6e-ac65-229a419d653f" path="/var/lib/kubelet/pods/df706708-e03c-4d6e-ac65-229a419d653f/volumes" Jan 30 07:10:55 crc kubenswrapper[4520]: I0130 07:10:55.305982 4520 scope.go:117] "RemoveContainer" 
containerID="2226d52d6e6304b6b79d579ee87ea0fef4db05f734eed40dc74be6d9a62eff0b" Jan 30 07:10:55 crc kubenswrapper[4520]: I0130 07:10:55.331618 4520 scope.go:117] "RemoveContainer" containerID="acf5d09c7e94ac9bf5c6318c5b1c6a00d87a60284ad2d32f701bf5a5c0ee6bee" Jan 30 07:10:55 crc kubenswrapper[4520]: I0130 07:10:55.365488 4520 scope.go:117] "RemoveContainer" containerID="5be57067c7407f6aa6d3be338b06ad9bc6ef28560cd5e542dacb862e6d6dba31" Jan 30 07:10:58 crc kubenswrapper[4520]: I0130 07:10:58.021030 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ld8j2"] Jan 30 07:10:58 crc kubenswrapper[4520]: I0130 07:10:58.027002 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-ld8j2"] Jan 30 07:10:58 crc kubenswrapper[4520]: I0130 07:10:58.693849 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b507ad-cda3-49b8-9a29-4c10ce6c1ac4" path="/var/lib/kubelet/pods/77b507ad-cda3-49b8-9a29-4c10ce6c1ac4/volumes" Jan 30 07:10:59 crc kubenswrapper[4520]: I0130 07:10:59.022559 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-qndsg"] Jan 30 07:10:59 crc kubenswrapper[4520]: I0130 07:10:59.025418 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-qndsg"] Jan 30 07:11:00 crc kubenswrapper[4520]: I0130 07:11:00.694614 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1771d5c5-4904-435a-81ac-80eaaf23bc68" path="/var/lib/kubelet/pods/1771d5c5-4904-435a-81ac-80eaaf23bc68/volumes" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.028735 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xgsxk"] Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.035946 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xgsxk"] Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.636616 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mpnxk"] Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.638473 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.648260 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mpnxk"] Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.678079 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxcg\" (UniqueName: \"kubernetes.io/projected/23b44211-1536-4155-a7e1-bca42426cf0a-kube-api-access-4jxcg\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.678218 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-catalog-content\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.678274 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-utilities\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.779939 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxcg\" (UniqueName: \"kubernetes.io/projected/23b44211-1536-4155-a7e1-bca42426cf0a-kube-api-access-4jxcg\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.780127 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-catalog-content\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.780198 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-utilities\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.780654 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-utilities\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.780902 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-catalog-content\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.802301 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4jxcg\" (UniqueName: \"kubernetes.io/projected/23b44211-1536-4155-a7e1-bca42426cf0a-kube-api-access-4jxcg\") pod \"community-operators-mpnxk\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") " pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:01 crc kubenswrapper[4520]: I0130 07:11:01.952675 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mpnxk" Jan 30 07:11:02 crc kubenswrapper[4520]: I0130 07:11:02.526410 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mpnxk"] Jan 30 07:11:02 crc kubenswrapper[4520]: I0130 07:11:02.696413 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2063bc-3a1e-4e9f-badc-299e256a2f3c" path="/var/lib/kubelet/pods/fc2063bc-3a1e-4e9f-badc-299e256a2f3c/volumes" Jan 30 07:11:03 crc kubenswrapper[4520]: I0130 07:11:03.025327 4520 generic.go:334] "Generic (PLEG): container finished" podID="23b44211-1536-4155-a7e1-bca42426cf0a" containerID="fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e" exitCode=0 Jan 30 07:11:03 crc kubenswrapper[4520]: I0130 07:11:03.025375 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpnxk" event={"ID":"23b44211-1536-4155-a7e1-bca42426cf0a","Type":"ContainerDied","Data":"fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e"} Jan 30 07:11:03 crc kubenswrapper[4520]: I0130 07:11:03.025422 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpnxk" event={"ID":"23b44211-1536-4155-a7e1-bca42426cf0a","Type":"ContainerStarted","Data":"a590784ca61b5c6aa5ef99452ba9c2f846c795d72fa29f49ac2f37a7a58ae075"} Jan 30 07:11:04 crc kubenswrapper[4520]: I0130 07:11:04.032599 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpnxk" event={"ID":"23b44211-1536-4155-a7e1-bca42426cf0a","Type":"ContainerStarted","Data":"65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3"} Jan 30 07:11:05 crc kubenswrapper[4520]: I0130 07:11:05.040368 4520 generic.go:334] "Generic (PLEG): container finished" podID="23b44211-1536-4155-a7e1-bca42426cf0a" containerID="65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3" exitCode=0 Jan 30 07:11:05 crc kubenswrapper[4520]: I0130 07:11:05.040642 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpnxk" event={"ID":"23b44211-1536-4155-a7e1-bca42426cf0a","Type":"ContainerDied","Data":"65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3"} Jan 30 07:11:06 crc kubenswrapper[4520]: I0130 07:11:06.049478 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpnxk" event={"ID":"23b44211-1536-4155-a7e1-bca42426cf0a","Type":"ContainerStarted","Data":"5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e"} Jan 30 07:11:06 crc kubenswrapper[4520]: I0130 07:11:06.064756 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mpnxk" podStartSLOduration=2.585181273 podStartE2EDuration="5.064740385s" podCreationTimestamp="2026-01-30 07:11:01 +0000 UTC" firstStartedPulling="2026-01-30 07:11:03.026903172 +0000 UTC m=+1576.655255353" lastFinishedPulling="2026-01-30 07:11:05.506462284 +0000 UTC m=+1579.134814465" observedRunningTime="2026-01-30 
Jan 30 07:11:06 crc kubenswrapper[4520]: I0130 07:11:06.064756 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mpnxk" podStartSLOduration=2.585181273 podStartE2EDuration="5.064740385s" podCreationTimestamp="2026-01-30 07:11:01 +0000 UTC" firstStartedPulling="2026-01-30 07:11:03.026903172 +0000 UTC m=+1576.655255353" lastFinishedPulling="2026-01-30 07:11:05.506462284 +0000 UTC m=+1579.134814465" observedRunningTime="2026-01-30 07:11:06.061765221 +0000 UTC m=+1579.690117402" watchObservedRunningTime="2026-01-30 07:11:06.064740385 +0000 UTC m=+1579.693092557"
Jan 30 07:11:11 crc kubenswrapper[4520]: I0130 07:11:11.952770 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mpnxk"
Jan 30 07:11:11 crc kubenswrapper[4520]: I0130 07:11:11.953244 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mpnxk"
Jan 30 07:11:11 crc kubenswrapper[4520]: I0130 07:11:11.986391 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mpnxk"
Jan 30 07:11:12 crc kubenswrapper[4520]: I0130 07:11:12.117057 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mpnxk"
Jan 30 07:11:12 crc kubenswrapper[4520]: I0130 07:11:12.214065 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mpnxk"]
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.097124 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mpnxk" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" containerName="registry-server" containerID="cri-o://5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e" gracePeriod=2
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.499056 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mpnxk"
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.674216 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-catalog-content\") pod \"23b44211-1536-4155-a7e1-bca42426cf0a\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") "
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.674451 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jxcg\" (UniqueName: \"kubernetes.io/projected/23b44211-1536-4155-a7e1-bca42426cf0a-kube-api-access-4jxcg\") pod \"23b44211-1536-4155-a7e1-bca42426cf0a\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") "
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.674609 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-utilities\") pod \"23b44211-1536-4155-a7e1-bca42426cf0a\" (UID: \"23b44211-1536-4155-a7e1-bca42426cf0a\") "
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.676387 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-utilities" (OuterVolumeSpecName: "utilities") pod "23b44211-1536-4155-a7e1-bca42426cf0a" (UID: "23b44211-1536-4155-a7e1-bca42426cf0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.680761 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b44211-1536-4155-a7e1-bca42426cf0a-kube-api-access-4jxcg" (OuterVolumeSpecName: "kube-api-access-4jxcg") pod "23b44211-1536-4155-a7e1-bca42426cf0a" (UID: "23b44211-1536-4155-a7e1-bca42426cf0a"). InnerVolumeSpecName "kube-api-access-4jxcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.731554 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23b44211-1536-4155-a7e1-bca42426cf0a" (UID: "23b44211-1536-4155-a7e1-bca42426cf0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.776962 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.776988 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jxcg\" (UniqueName: \"kubernetes.io/projected/23b44211-1536-4155-a7e1-bca42426cf0a-kube-api-access-4jxcg\") on node \"crc\" DevicePath \"\""
Jan 30 07:11:14 crc kubenswrapper[4520]: I0130 07:11:14.776999 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b44211-1536-4155-a7e1-bca42426cf0a-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.105672 4520 generic.go:334] "Generic (PLEG): container finished" podID="23b44211-1536-4155-a7e1-bca42426cf0a" containerID="5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e" exitCode=0
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.105775 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mpnxk"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.105775 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpnxk" event={"ID":"23b44211-1536-4155-a7e1-bca42426cf0a","Type":"ContainerDied","Data":"5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e"}
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.106003 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mpnxk" event={"ID":"23b44211-1536-4155-a7e1-bca42426cf0a","Type":"ContainerDied","Data":"a590784ca61b5c6aa5ef99452ba9c2f846c795d72fa29f49ac2f37a7a58ae075"}
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.106025 4520 scope.go:117] "RemoveContainer" containerID="5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.133482 4520 scope.go:117] "RemoveContainer" containerID="65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.140797 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mpnxk"]
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.146472 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mpnxk"]
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.151593 4520 scope.go:117] "RemoveContainer" containerID="fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.182889 4520 scope.go:117] "RemoveContainer" containerID="5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e"
Jan 30 07:11:15 crc kubenswrapper[4520]: E0130 07:11:15.183336 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e\": container with ID starting with 5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e not found: ID does not exist" containerID="5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.183379 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e"} err="failed to get container status \"5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e\": rpc error: code = NotFound desc = could not find container \"5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e\": container with ID starting with 5a6b613cfdcd9d25d73e1e332c3337abb14983c1be8bf941d8e1abec80eb623e not found: ID does not exist"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.183407 4520 scope.go:117] "RemoveContainer" containerID="65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3"
Jan 30 07:11:15 crc kubenswrapper[4520]: E0130 07:11:15.183693 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3\": container with ID starting with 65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3 not found: ID does not exist" containerID="65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.183729 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3"} err="failed to get container status \"65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3\": rpc error: code = NotFound desc = could not find container \"65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3\": container with ID starting with 65afed645ee4a64bc1a4f1e6487999f35ffabb13e7784a7d38d3d3971a443be3 not found: ID does not exist"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.183751 4520 scope.go:117] "RemoveContainer" containerID="fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e"
Jan 30 07:11:15 crc kubenswrapper[4520]: E0130 07:11:15.184024 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e\": container with ID starting with fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e not found: ID does not exist" containerID="fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e"
Jan 30 07:11:15 crc kubenswrapper[4520]: I0130 07:11:15.184049 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e"} err="failed to get container status \"fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e\": rpc error: code = NotFound desc = could not find container \"fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e\": container with ID starting with fa08e5027a9e6292b69af10b0df5e402b1165bc5bf2c057e8f7aace909ac592e not found: ID does not exist"
up orphaned pod volumes dir" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" path="/var/lib/kubelet/pods/23b44211-1536-4155-a7e1-bca42426cf0a/volumes" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.139579 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ctvr8"] Jan 30 07:11:38 crc kubenswrapper[4520]: E0130 07:11:38.140361 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" containerName="extract-utilities" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.140374 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" containerName="extract-utilities" Jan 30 07:11:38 crc kubenswrapper[4520]: E0130 07:11:38.140387 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" containerName="extract-content" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.140393 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" containerName="extract-content" Jan 30 07:11:38 crc kubenswrapper[4520]: E0130 07:11:38.140407 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" containerName="registry-server" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.140413 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" containerName="registry-server" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.140638 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b44211-1536-4155-a7e1-bca42426cf0a" containerName="registry-server" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.141762 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.155664 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctvr8"] Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.335716 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnnwn\" (UniqueName: \"kubernetes.io/projected/75188bce-a930-40ea-839c-6f7ca3e71d70-kube-api-access-vnnwn\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.336015 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-catalog-content\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.336249 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-utilities\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.437803 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-utilities\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.437880 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnnwn\" (UniqueName: \"kubernetes.io/projected/75188bce-a930-40ea-839c-6f7ca3e71d70-kube-api-access-vnnwn\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.437979 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-catalog-content\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.438314 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-utilities\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.438370 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-catalog-content\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.456081 4520 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vnnwn\" (UniqueName: \"kubernetes.io/projected/75188bce-a930-40ea-839c-6f7ca3e71d70-kube-api-access-vnnwn\") pod \"redhat-marketplace-ctvr8\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.456502 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:38 crc kubenswrapper[4520]: I0130 07:11:38.912147 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctvr8"] Jan 30 07:11:39 crc kubenswrapper[4520]: I0130 07:11:39.261761 4520 generic.go:334] "Generic (PLEG): container finished" podID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerID="5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436" exitCode=0 Jan 30 07:11:39 crc kubenswrapper[4520]: I0130 07:11:39.261835 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctvr8" event={"ID":"75188bce-a930-40ea-839c-6f7ca3e71d70","Type":"ContainerDied","Data":"5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436"} Jan 30 07:11:39 crc kubenswrapper[4520]: I0130 07:11:39.262039 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctvr8" event={"ID":"75188bce-a930-40ea-839c-6f7ca3e71d70","Type":"ContainerStarted","Data":"04801e8992b3241e69c2a917a30fa5ff26d0ad43fe8c646ebb6b32b869980dba"} Jan 30 07:11:40 crc kubenswrapper[4520]: I0130 07:11:40.270046 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctvr8" event={"ID":"75188bce-a930-40ea-839c-6f7ca3e71d70","Type":"ContainerStarted","Data":"fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2"} Jan 30 07:11:41 crc kubenswrapper[4520]: I0130 07:11:41.278119 4520 generic.go:334] "Generic (PLEG): container finished" podID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerID="fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2" exitCode=0 Jan 30 07:11:41 crc kubenswrapper[4520]: I0130 07:11:41.278159 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctvr8" event={"ID":"75188bce-a930-40ea-839c-6f7ca3e71d70","Type":"ContainerDied","Data":"fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2"} Jan 30 07:11:42 crc kubenswrapper[4520]: I0130 07:11:42.284969 4520 generic.go:334] "Generic (PLEG): container finished" podID="ce5ace07-4153-4a1d-b920-18f0d97db7ac" containerID="ebdb3c7ffd749b5f67dd266c4ea9f01aab8b7c3cffd6cfaf5687ac4fa10411f5" exitCode=0 Jan 30 07:11:42 crc kubenswrapper[4520]: I0130 07:11:42.285037 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" event={"ID":"ce5ace07-4153-4a1d-b920-18f0d97db7ac","Type":"ContainerDied","Data":"ebdb3c7ffd749b5f67dd266c4ea9f01aab8b7c3cffd6cfaf5687ac4fa10411f5"} Jan 30 07:11:42 crc kubenswrapper[4520]: I0130 07:11:42.287508 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctvr8" event={"ID":"75188bce-a930-40ea-839c-6f7ca3e71d70","Type":"ContainerStarted","Data":"68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067"} Jan 30 07:11:42 crc kubenswrapper[4520]: I0130 07:11:42.316624 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ctvr8" 
podStartSLOduration=1.7890842839999999 podStartE2EDuration="4.316605056s" podCreationTimestamp="2026-01-30 07:11:38 +0000 UTC" firstStartedPulling="2026-01-30 07:11:39.263489336 +0000 UTC m=+1612.891841517" lastFinishedPulling="2026-01-30 07:11:41.791010108 +0000 UTC m=+1615.419362289" observedRunningTime="2026-01-30 07:11:42.309635822 +0000 UTC m=+1615.937988003" watchObservedRunningTime="2026-01-30 07:11:42.316605056 +0000 UTC m=+1615.944957237" Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.038349 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f0fc-account-create-update-r729j"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.043962 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-328b-account-create-update-gh8sv"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.051564 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f0fc-account-create-update-r729j"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.056571 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-006f-account-create-update-xbt7p"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.062129 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-d2pnr"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.069657 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kbm45"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.085866 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8dnb5"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.090939 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-d2pnr"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.099721 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-328b-account-create-update-gh8sv"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.104949 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kbm45"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.112071 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-006f-account-create-update-xbt7p"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.117718 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8dnb5"] Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.605413 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.624084 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-inventory\") pod \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.624249 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-ssh-key-openstack-edpm-ipam\") pod \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.624315 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf6fd\" (UniqueName: \"kubernetes.io/projected/ce5ace07-4153-4a1d-b920-18f0d97db7ac-kube-api-access-bf6fd\") pod \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\" (UID: \"ce5ace07-4153-4a1d-b920-18f0d97db7ac\") " Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.630877 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5ace07-4153-4a1d-b920-18f0d97db7ac-kube-api-access-bf6fd" (OuterVolumeSpecName: "kube-api-access-bf6fd") pod "ce5ace07-4153-4a1d-b920-18f0d97db7ac" (UID: "ce5ace07-4153-4a1d-b920-18f0d97db7ac"). InnerVolumeSpecName "kube-api-access-bf6fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.678813 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce5ace07-4153-4a1d-b920-18f0d97db7ac" (UID: "ce5ace07-4153-4a1d-b920-18f0d97db7ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.680192 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-inventory" (OuterVolumeSpecName: "inventory") pod "ce5ace07-4153-4a1d-b920-18f0d97db7ac" (UID: "ce5ace07-4153-4a1d-b920-18f0d97db7ac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.725880 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.725968 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5ace07-4153-4a1d-b920-18f0d97db7ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:11:43 crc kubenswrapper[4520]: I0130 07:11:43.726022 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf6fd\" (UniqueName: \"kubernetes.io/projected/ce5ace07-4153-4a1d-b920-18f0d97db7ac-kube-api-access-bf6fd\") on node \"crc\" DevicePath \"\"" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.303690 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" event={"ID":"ce5ace07-4153-4a1d-b920-18f0d97db7ac","Type":"ContainerDied","Data":"ee990f2c1bb55650d57cb2f7bca6c278f539b7045365bce30ced08128536641e"} Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.303742 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kf7fw" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.303746 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee990f2c1bb55650d57cb2f7bca6c278f539b7045365bce30ced08128536641e" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.382107 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm"] Jan 30 07:11:44 crc kubenswrapper[4520]: E0130 07:11:44.382527 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5ace07-4153-4a1d-b920-18f0d97db7ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.382547 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5ace07-4153-4a1d-b920-18f0d97db7ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.382752 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5ace07-4153-4a1d-b920-18f0d97db7ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.383426 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.386628 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.388742 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.389759 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.394082 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm"] Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.395356 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.536852 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.537009 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz98h\" (UniqueName: \"kubernetes.io/projected/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-kube-api-access-tz98h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.537052 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.640310 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.640401 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.640542 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz98h\" (UniqueName: 
\"kubernetes.io/projected/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-kube-api-access-tz98h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.645231 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.650401 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.661676 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz98h\" (UniqueName: \"kubernetes.io/projected/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-kube-api-access-tz98h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.696861 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161e12d5-a5ef-44fd-ae2c-7b76c39202eb" path="/var/lib/kubelet/pods/161e12d5-a5ef-44fd-ae2c-7b76c39202eb/volumes" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.698435 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495b2abf-5190-409c-85b0-e6dcbef9ceaf" path="/var/lib/kubelet/pods/495b2abf-5190-409c-85b0-e6dcbef9ceaf/volumes" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.700264 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa64730-e3ba-46af-8849-f4f38170b71a" path="/var/lib/kubelet/pods/4fa64730-e3ba-46af-8849-f4f38170b71a/volumes" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.700865 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.701876 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22ec224-5158-47c9-bb3e-70a93147f671" path="/var/lib/kubelet/pods/b22ec224-5158-47c9-bb3e-70a93147f671/volumes" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.703369 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac5b11b-fec5-471b-8b93-5755e9adbf6a" path="/var/lib/kubelet/pods/dac5b11b-fec5-471b-8b93-5755e9adbf6a/volumes" Jan 30 07:11:44 crc kubenswrapper[4520]: I0130 07:11:44.704628 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efc58f00-bc50-41a3-ae74-2ed020c0ac1a" path="/var/lib/kubelet/pods/efc58f00-bc50-41a3-ae74-2ed020c0ac1a/volumes" Jan 30 07:11:45 crc kubenswrapper[4520]: I0130 07:11:45.206459 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm"] Jan 30 07:11:45 crc kubenswrapper[4520]: I0130 07:11:45.328626 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" event={"ID":"62a66b34-ff8e-4525-b8bf-6113f2dd4d56","Type":"ContainerStarted","Data":"205b40fcbcd1639fd28b2103b1d06f6dc11d463e989573268563ca437235795e"} Jan 30 07:11:46 crc kubenswrapper[4520]: I0130 07:11:46.336981 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" event={"ID":"62a66b34-ff8e-4525-b8bf-6113f2dd4d56","Type":"ContainerStarted","Data":"21449ebc116b3ae2c0eaf7eedd333e52ea7b2c5a5d46b15e08ad43c0f39e41f4"} Jan 30 07:11:46 crc kubenswrapper[4520]: I0130 07:11:46.361491 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" podStartSLOduration=1.851228389 podStartE2EDuration="2.361473025s" podCreationTimestamp="2026-01-30 07:11:44 +0000 UTC" firstStartedPulling="2026-01-30 07:11:45.227608668 +0000 UTC m=+1618.855960849" lastFinishedPulling="2026-01-30 07:11:45.737853303 +0000 UTC m=+1619.366205485" observedRunningTime="2026-01-30 07:11:46.357676596 +0000 UTC m=+1619.986028777" watchObservedRunningTime="2026-01-30 07:11:46.361473025 +0000 UTC m=+1619.989825206" Jan 30 07:11:48 crc kubenswrapper[4520]: I0130 07:11:48.458011 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:48 crc kubenswrapper[4520]: I0130 07:11:48.458449 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:48 crc kubenswrapper[4520]: I0130 07:11:48.497064 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:49 crc kubenswrapper[4520]: I0130 07:11:49.393082 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:49 crc kubenswrapper[4520]: I0130 07:11:49.440445 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctvr8"] Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.372936 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ctvr8" 
podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerName="registry-server" containerID="cri-o://68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067" gracePeriod=2 Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.765814 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.889610 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnnwn\" (UniqueName: \"kubernetes.io/projected/75188bce-a930-40ea-839c-6f7ca3e71d70-kube-api-access-vnnwn\") pod \"75188bce-a930-40ea-839c-6f7ca3e71d70\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.889835 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-catalog-content\") pod \"75188bce-a930-40ea-839c-6f7ca3e71d70\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.890049 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-utilities\") pod \"75188bce-a930-40ea-839c-6f7ca3e71d70\" (UID: \"75188bce-a930-40ea-839c-6f7ca3e71d70\") " Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.890551 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-utilities" (OuterVolumeSpecName: "utilities") pod "75188bce-a930-40ea-839c-6f7ca3e71d70" (UID: "75188bce-a930-40ea-839c-6f7ca3e71d70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.895540 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75188bce-a930-40ea-839c-6f7ca3e71d70-kube-api-access-vnnwn" (OuterVolumeSpecName: "kube-api-access-vnnwn") pod "75188bce-a930-40ea-839c-6f7ca3e71d70" (UID: "75188bce-a930-40ea-839c-6f7ca3e71d70"). InnerVolumeSpecName "kube-api-access-vnnwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.903955 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75188bce-a930-40ea-839c-6f7ca3e71d70" (UID: "75188bce-a930-40ea-839c-6f7ca3e71d70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.992691 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.992835 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnnwn\" (UniqueName: \"kubernetes.io/projected/75188bce-a930-40ea-839c-6f7ca3e71d70-kube-api-access-vnnwn\") on node \"crc\" DevicePath \"\"" Jan 30 07:11:51 crc kubenswrapper[4520]: I0130 07:11:51.992901 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75188bce-a930-40ea-839c-6f7ca3e71d70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.384452 4520 generic.go:334] "Generic (PLEG): container finished" podID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerID="68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067" exitCode=0 Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.384496 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctvr8" event={"ID":"75188bce-a930-40ea-839c-6f7ca3e71d70","Type":"ContainerDied","Data":"68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067"} Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.384544 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctvr8" event={"ID":"75188bce-a930-40ea-839c-6f7ca3e71d70","Type":"ContainerDied","Data":"04801e8992b3241e69c2a917a30fa5ff26d0ad43fe8c646ebb6b32b869980dba"} Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.384568 4520 scope.go:117] "RemoveContainer" containerID="68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.386014 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctvr8" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.409826 4520 scope.go:117] "RemoveContainer" containerID="fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.424665 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctvr8"] Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.439092 4520 scope.go:117] "RemoveContainer" containerID="5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.454860 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctvr8"] Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.472733 4520 scope.go:117] "RemoveContainer" containerID="68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067" Jan 30 07:11:52 crc kubenswrapper[4520]: E0130 07:11:52.473089 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067\": container with ID starting with 68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067 not found: ID does not exist" containerID="68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.473137 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067"} err="failed to get container status \"68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067\": rpc error: code = NotFound desc = could not find container \"68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067\": container with ID starting with 68c100c43cd816e5107b7eed4e55d798503b1803f1378c0201d4906e4b23c067 not found: ID does not exist" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.473166 4520 scope.go:117] "RemoveContainer" containerID="fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2" Jan 30 07:11:52 crc kubenswrapper[4520]: E0130 07:11:52.473421 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2\": container with ID starting with fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2 not found: ID does not exist" containerID="fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.473444 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2"} err="failed to get container status \"fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2\": rpc error: code = NotFound desc = could not find container \"fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2\": container with ID starting with fc0d378106f8fa64770024bb1c3115717625b968362c924f78f1cef035936ca2 not found: ID does not exist" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.473466 4520 scope.go:117] "RemoveContainer" containerID="5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436" Jan 30 07:11:52 crc kubenswrapper[4520]: E0130 07:11:52.473784 4520 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436\": container with ID starting with 5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436 not found: ID does not exist" containerID="5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.473869 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436"} err="failed to get container status \"5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436\": rpc error: code = NotFound desc = could not find container \"5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436\": container with ID starting with 5a1b374db56fc2c79f3dc40d5f5e0781e0be0f6422a912d6dde34b42c20f4436 not found: ID does not exist" Jan 30 07:11:52 crc kubenswrapper[4520]: I0130 07:11:52.695134 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" path="/var/lib/kubelet/pods/75188bce-a930-40ea-839c-6f7ca3e71d70/volumes" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.446070 4520 scope.go:117] "RemoveContainer" containerID="5e08fba8ff43657b3eec2fabb594b131c029cb860ee9f0d6053de5ae0b8b3bc3" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.466289 4520 scope.go:117] "RemoveContainer" containerID="31e05604b9996f0edc15611d6cbc37f4a9c70393866a096aefd9b99e7057c7e9" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.507970 4520 scope.go:117] "RemoveContainer" containerID="638836148d5e03844d89be689272833795cba79bfcf01cd5b104693429f721c6" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.533569 4520 scope.go:117] "RemoveContainer" containerID="3060b06c3e9bfb4124e00440650a276b127207c3ebb47c4e79baea7996cee5a0" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.575186 4520 scope.go:117] "RemoveContainer" containerID="8c2a47b934cd7fcb72c7ebaab7afee2f34e2c9ffce80e8f1cb99669cbb0bb412" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.612735 4520 scope.go:117] "RemoveContainer" containerID="648d00aa9bf7d684cae40e3f61a3761b2af70ff04edff4dd1f6278577713ff57" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.646946 4520 scope.go:117] "RemoveContainer" containerID="fc36abad5343603ac251f9b179313f05a7b50a287be29781953fa4cbec0660f8" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.664801 4520 scope.go:117] "RemoveContainer" containerID="5c17e12724ac3abe596329d4b8fe0e2cf1d706e4940192903c766a0b189667ac" Jan 30 07:11:55 crc kubenswrapper[4520]: I0130 07:11:55.680599 4520 scope.go:117] "RemoveContainer" containerID="2badfe22aa20192f0acd3b1e47d272ec04921189361d6fb7c7e4b9b7da91ed9f" Jan 30 07:11:57 crc kubenswrapper[4520]: I0130 07:11:57.793122 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:11:57 crc kubenswrapper[4520]: I0130 07:11:57.794087 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
30 07:12:22 crc kubenswrapper[4520]: I0130 07:12:22.029386 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cvbx4"] Jan 30 07:12:22 crc kubenswrapper[4520]: I0130 07:12:22.034189 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cvbx4"] Jan 30 07:12:22 crc kubenswrapper[4520]: I0130 07:12:22.694990 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c4f381-2282-4c3c-8735-8862b07e65dc" path="/var/lib/kubelet/pods/12c4f381-2282-4c3c-8735-8862b07e65dc/volumes" Jan 30 07:12:27 crc kubenswrapper[4520]: I0130 07:12:27.793233 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:12:27 crc kubenswrapper[4520]: I0130 07:12:27.793431 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:12:52 crc kubenswrapper[4520]: I0130 07:12:52.783856 4520 generic.go:334] "Generic (PLEG): container finished" podID="62a66b34-ff8e-4525-b8bf-6113f2dd4d56" containerID="21449ebc116b3ae2c0eaf7eedd333e52ea7b2c5a5d46b15e08ad43c0f39e41f4" exitCode=0 Jan 30 07:12:52 crc kubenswrapper[4520]: I0130 07:12:52.783945 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" event={"ID":"62a66b34-ff8e-4525-b8bf-6113f2dd4d56","Type":"ContainerDied","Data":"21449ebc116b3ae2c0eaf7eedd333e52ea7b2c5a5d46b15e08ad43c0f39e41f4"} Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.119990 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.244686 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz98h\" (UniqueName: \"kubernetes.io/projected/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-kube-api-access-tz98h\") pod \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.244782 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-inventory\") pod \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.244817 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-ssh-key-openstack-edpm-ipam\") pod \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\" (UID: \"62a66b34-ff8e-4525-b8bf-6113f2dd4d56\") " Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.249002 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-kube-api-access-tz98h" (OuterVolumeSpecName: "kube-api-access-tz98h") pod "62a66b34-ff8e-4525-b8bf-6113f2dd4d56" (UID: "62a66b34-ff8e-4525-b8bf-6113f2dd4d56"). InnerVolumeSpecName "kube-api-access-tz98h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.266868 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "62a66b34-ff8e-4525-b8bf-6113f2dd4d56" (UID: "62a66b34-ff8e-4525-b8bf-6113f2dd4d56"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.267326 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-inventory" (OuterVolumeSpecName: "inventory") pod "62a66b34-ff8e-4525-b8bf-6113f2dd4d56" (UID: "62a66b34-ff8e-4525-b8bf-6113f2dd4d56"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.346154 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.346183 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.346194 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz98h\" (UniqueName: \"kubernetes.io/projected/62a66b34-ff8e-4525-b8bf-6113f2dd4d56-kube-api-access-tz98h\") on node \"crc\" DevicePath \"\"" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.798821 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" event={"ID":"62a66b34-ff8e-4525-b8bf-6113f2dd4d56","Type":"ContainerDied","Data":"205b40fcbcd1639fd28b2103b1d06f6dc11d463e989573268563ca437235795e"} Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.798865 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="205b40fcbcd1639fd28b2103b1d06f6dc11d463e989573268563ca437235795e" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.798874 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2hkdm" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.879806 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss"] Jan 30 07:12:54 crc kubenswrapper[4520]: E0130 07:12:54.880107 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a66b34-ff8e-4525-b8bf-6113f2dd4d56" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.880126 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a66b34-ff8e-4525-b8bf-6113f2dd4d56" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 07:12:54 crc kubenswrapper[4520]: E0130 07:12:54.880141 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerName="extract-content" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.880149 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerName="extract-content" Jan 30 07:12:54 crc kubenswrapper[4520]: E0130 07:12:54.880157 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerName="extract-utilities" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.880162 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerName="extract-utilities" Jan 30 07:12:54 crc kubenswrapper[4520]: E0130 07:12:54.880177 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerName="registry-server" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.880182 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerName="registry-server" Jan 30 07:12:54 crc 
kubenswrapper[4520]: I0130 07:12:54.880336 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a66b34-ff8e-4525-b8bf-6113f2dd4d56" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.880350 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="75188bce-a930-40ea-839c-6f7ca3e71d70" containerName="registry-server" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.880917 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.883385 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.883704 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.883844 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.884065 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:12:54 crc kubenswrapper[4520]: I0130 07:12:54.892590 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss"] Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.057015 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.057298 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.057485 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7rp\" (UniqueName: \"kubernetes.io/projected/43a470d7-27ae-41d0-97e6-dde24f62a7c2-kube-api-access-bh7rp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.159085 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.159172 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bh7rp\" (UniqueName: \"kubernetes.io/projected/43a470d7-27ae-41d0-97e6-dde24f62a7c2-kube-api-access-bh7rp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.159213 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.162333 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.163268 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.176640 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh7rp\" (UniqueName: \"kubernetes.io/projected/43a470d7-27ae-41d0-97e6-dde24f62a7c2-kube-api-access-bh7rp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bswss\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.194670 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" Jan 30 07:12:55 crc kubenswrapper[4520]: W0130 07:12:55.639316 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a470d7_27ae_41d0_97e6_dde24f62a7c2.slice/crio-8b52a4606b6f457cf6bd2878c2c918566274face1adcc3a559b1f71a44acbc4e WatchSource:0}: Error finding container 8b52a4606b6f457cf6bd2878c2c918566274face1adcc3a559b1f71a44acbc4e: Status 404 returned error can't find the container with id 8b52a4606b6f457cf6bd2878c2c918566274face1adcc3a559b1f71a44acbc4e Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.640393 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss"] Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.805859 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" event={"ID":"43a470d7-27ae-41d0-97e6-dde24f62a7c2","Type":"ContainerStarted","Data":"8b52a4606b6f457cf6bd2878c2c918566274face1adcc3a559b1f71a44acbc4e"} Jan 30 07:12:55 crc kubenswrapper[4520]: I0130 07:12:55.858868 4520 scope.go:117] "RemoveContainer" containerID="f536a2a6852db4fc7aa42b4b86986e5ac7f2c01a1c28b5abf5bebf08ece6bc32" Jan 30 07:12:56 crc kubenswrapper[4520]: I0130 07:12:56.813839 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" event={"ID":"43a470d7-27ae-41d0-97e6-dde24f62a7c2","Type":"ContainerStarted","Data":"a7e60900c9e8689219dea9e7e3d5075dcddec261eeab108a9d5ae02b5f2aabbb"} Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.793593 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.793964 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.794009 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.794955 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.795011 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" gracePeriod=600 Jan 30 07:12:57 crc kubenswrapper[4520]: E0130 07:12:57.921664 4520 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.969999 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" podStartSLOduration=3.411724643 podStartE2EDuration="3.969981136s" podCreationTimestamp="2026-01-30 07:12:54 +0000 UTC" firstStartedPulling="2026-01-30 07:12:55.641710412 +0000 UTC m=+1689.270062594" lastFinishedPulling="2026-01-30 07:12:56.199966906 +0000 UTC m=+1689.828319087" observedRunningTime="2026-01-30 07:12:56.830643801 +0000 UTC m=+1690.458995982" watchObservedRunningTime="2026-01-30 07:12:57.969981136 +0000 UTC m=+1691.598333316" Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.978197 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r98nr"] Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.980171 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:57 crc kubenswrapper[4520]: I0130 07:12:57.994096 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r98nr"] Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.111463 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnsq\" (UniqueName: \"kubernetes.io/projected/0612957c-3a76-4d67-b27e-cfeb33545952-kube-api-access-vpnsq\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") " pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.111834 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-utilities\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") " pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.111920 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-catalog-content\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") " pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.214068 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-utilities\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") " pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.214164 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-catalog-content\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") 
" pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.214204 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpnsq\" (UniqueName: \"kubernetes.io/projected/0612957c-3a76-4d67-b27e-cfeb33545952-kube-api-access-vpnsq\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") " pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.214774 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-catalog-content\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") " pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.214777 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-utilities\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") " pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.233899 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpnsq\" (UniqueName: \"kubernetes.io/projected/0612957c-3a76-4d67-b27e-cfeb33545952-kube-api-access-vpnsq\") pod \"redhat-operators-r98nr\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") " pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.306240 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r98nr" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.750894 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r98nr"] Jan 30 07:12:58 crc kubenswrapper[4520]: W0130 07:12:58.758544 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0612957c_3a76_4d67_b27e_cfeb33545952.slice/crio-2561a1371149de401a552a2dc5e9b4be03474df958455851988c5eabe704efa7 WatchSource:0}: Error finding container 2561a1371149de401a552a2dc5e9b4be03474df958455851988c5eabe704efa7: Status 404 returned error can't find the container with id 2561a1371149de401a552a2dc5e9b4be03474df958455851988c5eabe704efa7 Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.832934 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" exitCode=0 Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.832990 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7"} Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.833022 4520 scope.go:117] "RemoveContainer" containerID="b00f4ab612613c1cf7c10de4b942ca02f5ce93773b4911ac63542d5a5740888c" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.833608 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:12:58 crc kubenswrapper[4520]: E0130 07:12:58.833890 4520 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:12:58 crc kubenswrapper[4520]: I0130 07:12:58.835380 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98nr" event={"ID":"0612957c-3a76-4d67-b27e-cfeb33545952","Type":"ContainerStarted","Data":"2561a1371149de401a552a2dc5e9b4be03474df958455851988c5eabe704efa7"} Jan 30 07:12:59 crc kubenswrapper[4520]: I0130 07:12:59.845475 4520 generic.go:334] "Generic (PLEG): container finished" podID="0612957c-3a76-4d67-b27e-cfeb33545952" containerID="f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19" exitCode=0 Jan 30 07:12:59 crc kubenswrapper[4520]: I0130 07:12:59.845561 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98nr" event={"ID":"0612957c-3a76-4d67-b27e-cfeb33545952","Type":"ContainerDied","Data":"f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19"} Jan 30 07:13:00 crc kubenswrapper[4520]: I0130 07:13:00.856228 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98nr" event={"ID":"0612957c-3a76-4d67-b27e-cfeb33545952","Type":"ContainerStarted","Data":"17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404"} Jan 30 07:13:00 crc kubenswrapper[4520]: I0130 07:13:00.861064 4520 generic.go:334] "Generic (PLEG): container finished" podID="43a470d7-27ae-41d0-97e6-dde24f62a7c2" containerID="a7e60900c9e8689219dea9e7e3d5075dcddec261eeab108a9d5ae02b5f2aabbb" exitCode=0 Jan 30 07:13:00 crc kubenswrapper[4520]: I0130 07:13:00.861094 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" event={"ID":"43a470d7-27ae-41d0-97e6-dde24f62a7c2","Type":"ContainerDied","Data":"a7e60900c9e8689219dea9e7e3d5075dcddec261eeab108a9d5ae02b5f2aabbb"} Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.217158 4520 util.go:48] "No ready sandbox for pod can be found. 
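The recurring "back-off 5m0s restarting failed container" errors for machine-config-daemon show the kubelet's restart backoff already at its ceiling: each crash roughly doubles the delay before the next restart attempt, up to the cap reported in the message (5m0s here). An illustrative sketch of that shape only; the 10s initial delay is an assumption for the example, not a value read from this cluster's configuration:

```go
package main

import (
	"fmt"
	"time"
)

// restartDelay doubles the wait per observed crash, clamped at max,
// mirroring the CrashLoopBackOff behaviour reported in the log above.
func restartDelay(initial, max time.Duration, restarts int) time.Duration {
	d := initial
	for i := 0; i < restarts && d < max; i++ {
		d *= 2
	}
	if d > max {
		d = max
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, restartDelay(10*time.Second, 5*time.Minute, r))
	}
}
```

Once the cap is reached, every sync attempt inside the window is skipped with the same "Error syncing pod" line, which is why the message repeats verbatim throughout this log.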
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.390900 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-ssh-key-openstack-edpm-ipam\") pod \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") "
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.390955 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh7rp\" (UniqueName: \"kubernetes.io/projected/43a470d7-27ae-41d0-97e6-dde24f62a7c2-kube-api-access-bh7rp\") pod \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") "
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.391028 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-inventory\") pod \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\" (UID: \"43a470d7-27ae-41d0-97e6-dde24f62a7c2\") "
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.396727 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a470d7-27ae-41d0-97e6-dde24f62a7c2-kube-api-access-bh7rp" (OuterVolumeSpecName: "kube-api-access-bh7rp") pod "43a470d7-27ae-41d0-97e6-dde24f62a7c2" (UID: "43a470d7-27ae-41d0-97e6-dde24f62a7c2"). InnerVolumeSpecName "kube-api-access-bh7rp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.416578 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43a470d7-27ae-41d0-97e6-dde24f62a7c2" (UID: "43a470d7-27ae-41d0-97e6-dde24f62a7c2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.416700 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-inventory" (OuterVolumeSpecName: "inventory") pod "43a470d7-27ae-41d0-97e6-dde24f62a7c2" (UID: "43a470d7-27ae-41d0-97e6-dde24f62a7c2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.493191 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.493229 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a470d7-27ae-41d0-97e6-dde24f62a7c2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.493243 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh7rp\" (UniqueName: \"kubernetes.io/projected/43a470d7-27ae-41d0-97e6-dde24f62a7c2-kube-api-access-bh7rp\") on node \"crc\" DevicePath \"\""
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.883001 4520 generic.go:334] "Generic (PLEG): container finished" podID="0612957c-3a76-4d67-b27e-cfeb33545952" containerID="17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404" exitCode=0
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.883153 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98nr" event={"ID":"0612957c-3a76-4d67-b27e-cfeb33545952","Type":"ContainerDied","Data":"17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404"}
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.887795 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss" event={"ID":"43a470d7-27ae-41d0-97e6-dde24f62a7c2","Type":"ContainerDied","Data":"8b52a4606b6f457cf6bd2878c2c918566274face1adcc3a559b1f71a44acbc4e"}
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.887825 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b52a4606b6f457cf6bd2878c2c918566274face1adcc3a559b1f71a44acbc4e"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.887838 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bswss"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.954357 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"]
Jan 30 07:13:02 crc kubenswrapper[4520]: E0130 07:13:02.954961 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a470d7-27ae-41d0-97e6-dde24f62a7c2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.954980 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a470d7-27ae-41d0-97e6-dde24f62a7c2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.955148 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a470d7-27ae-41d0-97e6-dde24f62a7c2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.955695 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.959132 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.959420 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.959579 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.959998 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58"
Jan 30 07:13:02 crc kubenswrapper[4520]: I0130 07:13:02.967661 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"]
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.004365 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-798xc\" (UniqueName: \"kubernetes.io/projected/63fccac5-b2c5-4909-9a43-7e40ec403a6b-kube-api-access-798xc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.004625 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.004794 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.106437 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.106554 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-798xc\" (UniqueName: \"kubernetes.io/projected/63fccac5-b2c5-4909-9a43-7e40ec403a6b-kube-api-access-798xc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.106622 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.114061 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.114660 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.124876 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-798xc\" (UniqueName: \"kubernetes.io/projected/63fccac5-b2c5-4909-9a43-7e40ec403a6b-kube-api-access-798xc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t5tcd\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.280045 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.898285 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd"]
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.899583 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98nr" event={"ID":"0612957c-3a76-4d67-b27e-cfeb33545952","Type":"ContainerStarted","Data":"a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08"}
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.902097 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd" event={"ID":"63fccac5-b2c5-4909-9a43-7e40ec403a6b","Type":"ContainerStarted","Data":"fb66b6fb0f0ff4ee97a1ffefacd0f6d0f0b18be114636affbcde0c71fff43a45"}
Jan 30 07:13:03 crc kubenswrapper[4520]: I0130 07:13:03.921481 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r98nr" podStartSLOduration=3.435262189 podStartE2EDuration="6.921464229s" podCreationTimestamp="2026-01-30 07:12:57 +0000 UTC" firstStartedPulling="2026-01-30 07:12:59.847653955 +0000 UTC m=+1693.476006136" lastFinishedPulling="2026-01-30 07:13:03.333855995 +0000 UTC m=+1696.962208176" observedRunningTime="2026-01-30 07:13:03.91478457 +0000 UTC m=+1697.543136750" watchObservedRunningTime="2026-01-30 07:13:03.921464229 +0000 UTC m=+1697.549816410"
Jan 30 07:13:04 crc kubenswrapper[4520]: I0130 07:13:04.914268 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd" event={"ID":"63fccac5-b2c5-4909-9a43-7e40ec403a6b","Type":"ContainerStarted","Data":"1279edfd2995135ff21f9fab804f2f3327dda6bf06ed2596315831feb7cefb40"}
Jan 30 07:13:04 crc kubenswrapper[4520]: I0130 07:13:04.940060 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd" podStartSLOduration=2.38357992 podStartE2EDuration="2.940044075s" podCreationTimestamp="2026-01-30 07:13:02 +0000 UTC" firstStartedPulling="2026-01-30 07:13:03.89681971 +0000 UTC m=+1697.525171890" lastFinishedPulling="2026-01-30 07:13:04.453283864 +0000 UTC m=+1698.081636045" observedRunningTime="2026-01-30 07:13:04.934730473 +0000 UTC m=+1698.563082654" watchObservedRunningTime="2026-01-30 07:13:04.940044075 +0000 UTC m=+1698.568396255"
Jan 30 07:13:08 crc kubenswrapper[4520]: I0130 07:13:08.306638 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r98nr"
Jan 30 07:13:08 crc kubenswrapper[4520]: I0130 07:13:08.307138 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r98nr"
Jan 30 07:13:09 crc kubenswrapper[4520]: I0130 07:13:09.397550 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r98nr" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="registry-server" probeResult="failure" output=<
Jan 30 07:13:09 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s
Jan 30 07:13:09 crc kubenswrapper[4520]: >
Jan 30 07:13:13 crc kubenswrapper[4520]: I0130 07:13:13.685349 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7"
Jan 30 07:13:13 crc kubenswrapper[4520]: E0130 07:13:13.686351 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 07:13:18 crc kubenswrapper[4520]: I0130 07:13:18.349707 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r98nr"
Jan 30 07:13:18 crc kubenswrapper[4520]: I0130 07:13:18.398143 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r98nr"
Jan 30 07:13:18 crc kubenswrapper[4520]: I0130 07:13:18.591395 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r98nr"]
Jan 30 07:13:19 crc kubenswrapper[4520]: I0130 07:13:19.034210 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pkvxs"]
Jan 30 07:13:19 crc kubenswrapper[4520]: I0130 07:13:19.041262 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pkvxs"]
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.041254 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r98nr" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="registry-server" containerID="cri-o://a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08" gracePeriod=2
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.462085 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r98nr"
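The startup-probe failure above, `timeout: failed to connect service ":50051" within 1s`, has the shape of a gRPC health check against the registry-server port that has not started serving yet: the probe times out, then flips to "started"/"ready" nine seconds later. A hedged sketch of such a check using the standard gRPC health API (the address and one-second budget mirror the log; this is illustrative, not the probe binary the catalog image actually ships):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Same budget as the probe in the log: 1s to connect and get an answer.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial error:", err)
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. deadline exceeded while the server warms up
		return
	}
	fmt.Println("probe status:", resp.GetStatus()) // SERVING once the registry is up
}
```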
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.604781 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-utilities\") pod \"0612957c-3a76-4d67-b27e-cfeb33545952\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") "
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.604893 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpnsq\" (UniqueName: \"kubernetes.io/projected/0612957c-3a76-4d67-b27e-cfeb33545952-kube-api-access-vpnsq\") pod \"0612957c-3a76-4d67-b27e-cfeb33545952\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") "
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.604928 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-catalog-content\") pod \"0612957c-3a76-4d67-b27e-cfeb33545952\" (UID: \"0612957c-3a76-4d67-b27e-cfeb33545952\") "
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.605397 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-utilities" (OuterVolumeSpecName: "utilities") pod "0612957c-3a76-4d67-b27e-cfeb33545952" (UID: "0612957c-3a76-4d67-b27e-cfeb33545952"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.606736 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.612100 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0612957c-3a76-4d67-b27e-cfeb33545952-kube-api-access-vpnsq" (OuterVolumeSpecName: "kube-api-access-vpnsq") pod "0612957c-3a76-4d67-b27e-cfeb33545952" (UID: "0612957c-3a76-4d67-b27e-cfeb33545952"). InnerVolumeSpecName "kube-api-access-vpnsq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.683799 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0612957c-3a76-4d67-b27e-cfeb33545952" (UID: "0612957c-3a76-4d67-b27e-cfeb33545952"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.695968 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336ca74f-9c49-494d-a5fd-4b67fa9dc2c7" path="/var/lib/kubelet/pods/336ca74f-9c49-494d-a5fd-4b67fa9dc2c7/volumes"
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.709726 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpnsq\" (UniqueName: \"kubernetes.io/projected/0612957c-3a76-4d67-b27e-cfeb33545952-kube-api-access-vpnsq\") on node \"crc\" DevicePath \"\""
Jan 30 07:13:20 crc kubenswrapper[4520]: I0130 07:13:20.709753 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0612957c-3a76-4d67-b27e-cfeb33545952-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.053240 4520 generic.go:334] "Generic (PLEG): container finished" podID="0612957c-3a76-4d67-b27e-cfeb33545952" containerID="a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08" exitCode=0
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.053335 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r98nr"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.053562 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98nr" event={"ID":"0612957c-3a76-4d67-b27e-cfeb33545952","Type":"ContainerDied","Data":"a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08"}
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.053630 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98nr" event={"ID":"0612957c-3a76-4d67-b27e-cfeb33545952","Type":"ContainerDied","Data":"2561a1371149de401a552a2dc5e9b4be03474df958455851988c5eabe704efa7"}
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.053655 4520 scope.go:117] "RemoveContainer" containerID="a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.062685 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dlt85"]
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.069035 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dlt85"]
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.097051 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r98nr"]
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.107977 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r98nr"]
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.110656 4520 scope.go:117] "RemoveContainer" containerID="17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.138767 4520 scope.go:117] "RemoveContainer" containerID="f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.190733 4520 scope.go:117] "RemoveContainer" containerID="a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08"
Jan 30 07:13:21 crc kubenswrapper[4520]: E0130 07:13:21.192900 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08\": container with ID starting with a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08 not found: ID does not exist" containerID="a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.192953 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08"} err="failed to get container status \"a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08\": rpc error: code = NotFound desc = could not find container \"a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08\": container with ID starting with a53946aaad3c49dcd544a673f6225058420b76ede46a252ea835fe46688a0b08 not found: ID does not exist"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.192989 4520 scope.go:117] "RemoveContainer" containerID="17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404"
Jan 30 07:13:21 crc kubenswrapper[4520]: E0130 07:13:21.193800 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404\": container with ID starting with 17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404 not found: ID does not exist" containerID="17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.193834 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404"} err="failed to get container status \"17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404\": rpc error: code = NotFound desc = could not find container \"17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404\": container with ID starting with 17934846beee8aa0e9f7f29ab9c7e04ab31c614d912419a008a8b43f3e0bc404 not found: ID does not exist"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.193857 4520 scope.go:117] "RemoveContainer" containerID="f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19"
Jan 30 07:13:21 crc kubenswrapper[4520]: E0130 07:13:21.195662 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19\": container with ID starting with f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19 not found: ID does not exist" containerID="f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19"
Jan 30 07:13:21 crc kubenswrapper[4520]: I0130 07:13:21.195693 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19"} err="failed to get container status \"f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19\": rpc error: code = NotFound desc = could not find container \"f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19\": container with ID starting with f6f3fa870a7c0db5fc7cd5d0ec7381672f09ea8b28a4c3823c58a4d11ebfbc19 not found: ID does not exist"
Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.693892 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" path="/var/lib/kubelet/pods/0612957c-3a76-4d67-b27e-cfeb33545952/volumes"
path="/var/lib/kubelet/pods/0612957c-3a76-4d67-b27e-cfeb33545952/volumes" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.695891 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d598af-4041-4d4e-8594-56d19d1225f5" path="/var/lib/kubelet/pods/13d598af-4041-4d4e-8594-56d19d1225f5/volumes" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.788059 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65pf6"] Jan 30 07:13:22 crc kubenswrapper[4520]: E0130 07:13:22.788578 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="extract-content" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.788594 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="extract-content" Jan 30 07:13:22 crc kubenswrapper[4520]: E0130 07:13:22.788615 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="extract-utilities" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.788620 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="extract-utilities" Jan 30 07:13:22 crc kubenswrapper[4520]: E0130 07:13:22.788650 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="registry-server" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.788659 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="registry-server" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.788852 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="0612957c-3a76-4d67-b27e-cfeb33545952" containerName="registry-server" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.790287 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.807422 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65pf6"] Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.957307 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-catalog-content\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.957735 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6sw\" (UniqueName: \"kubernetes.io/projected/bac2055a-1086-46f8-af65-d10512490664-kube-api-access-wm6sw\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:22 crc kubenswrapper[4520]: I0130 07:13:22.957877 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-utilities\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:23 crc kubenswrapper[4520]: I0130 07:13:23.060699 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-utilities\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:23 crc kubenswrapper[4520]: I0130 07:13:23.060839 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-catalog-content\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:23 crc kubenswrapper[4520]: I0130 07:13:23.060904 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6sw\" (UniqueName: \"kubernetes.io/projected/bac2055a-1086-46f8-af65-d10512490664-kube-api-access-wm6sw\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:23 crc kubenswrapper[4520]: I0130 07:13:23.061182 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-utilities\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:23 crc kubenswrapper[4520]: I0130 07:13:23.061256 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-catalog-content\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:23 crc kubenswrapper[4520]: I0130 07:13:23.082117 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wm6sw\" (UniqueName: \"kubernetes.io/projected/bac2055a-1086-46f8-af65-d10512490664-kube-api-access-wm6sw\") pod \"certified-operators-65pf6\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:23 crc kubenswrapper[4520]: I0130 07:13:23.106387 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:23 crc kubenswrapper[4520]: I0130 07:13:23.514923 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65pf6"] Jan 30 07:13:23 crc kubenswrapper[4520]: W0130 07:13:23.522957 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbac2055a_1086_46f8_af65_d10512490664.slice/crio-367d300a7fe895c0922307169afd977731c1abbef0c4d29d63a2e5ce8f43ed42 WatchSource:0}: Error finding container 367d300a7fe895c0922307169afd977731c1abbef0c4d29d63a2e5ce8f43ed42: Status 404 returned error can't find the container with id 367d300a7fe895c0922307169afd977731c1abbef0c4d29d63a2e5ce8f43ed42 Jan 30 07:13:24 crc kubenswrapper[4520]: I0130 07:13:24.076103 4520 generic.go:334] "Generic (PLEG): container finished" podID="bac2055a-1086-46f8-af65-d10512490664" containerID="833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f" exitCode=0 Jan 30 07:13:24 crc kubenswrapper[4520]: I0130 07:13:24.076151 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pf6" event={"ID":"bac2055a-1086-46f8-af65-d10512490664","Type":"ContainerDied","Data":"833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f"} Jan 30 07:13:24 crc kubenswrapper[4520]: I0130 07:13:24.076368 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pf6" event={"ID":"bac2055a-1086-46f8-af65-d10512490664","Type":"ContainerStarted","Data":"367d300a7fe895c0922307169afd977731c1abbef0c4d29d63a2e5ce8f43ed42"} Jan 30 07:13:24 crc kubenswrapper[4520]: I0130 07:13:24.686120 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:13:24 crc kubenswrapper[4520]: E0130 07:13:24.686672 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:13:25 crc kubenswrapper[4520]: I0130 07:13:25.086123 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pf6" event={"ID":"bac2055a-1086-46f8-af65-d10512490664","Type":"ContainerStarted","Data":"c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7"} Jan 30 07:13:26 crc kubenswrapper[4520]: I0130 07:13:26.095030 4520 generic.go:334] "Generic (PLEG): container finished" podID="bac2055a-1086-46f8-af65-d10512490664" containerID="c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7" exitCode=0 Jan 30 07:13:26 crc kubenswrapper[4520]: I0130 07:13:26.095085 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pf6" 
event={"ID":"bac2055a-1086-46f8-af65-d10512490664","Type":"ContainerDied","Data":"c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7"} Jan 30 07:13:27 crc kubenswrapper[4520]: I0130 07:13:27.103461 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pf6" event={"ID":"bac2055a-1086-46f8-af65-d10512490664","Type":"ContainerStarted","Data":"c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7"} Jan 30 07:13:33 crc kubenswrapper[4520]: I0130 07:13:33.106940 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:33 crc kubenswrapper[4520]: I0130 07:13:33.107573 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:33 crc kubenswrapper[4520]: I0130 07:13:33.151460 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:33 crc kubenswrapper[4520]: I0130 07:13:33.179940 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65pf6" podStartSLOduration=8.613176113 podStartE2EDuration="11.179916478s" podCreationTimestamp="2026-01-30 07:13:22 +0000 UTC" firstStartedPulling="2026-01-30 07:13:24.07779389 +0000 UTC m=+1717.706146072" lastFinishedPulling="2026-01-30 07:13:26.644534256 +0000 UTC m=+1720.272886437" observedRunningTime="2026-01-30 07:13:27.124144813 +0000 UTC m=+1720.752496994" watchObservedRunningTime="2026-01-30 07:13:33.179916478 +0000 UTC m=+1726.808268660" Jan 30 07:13:33 crc kubenswrapper[4520]: I0130 07:13:33.193705 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:33 crc kubenswrapper[4520]: I0130 07:13:33.390211 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65pf6"] Jan 30 07:13:34 crc kubenswrapper[4520]: I0130 07:13:34.155626 4520 generic.go:334] "Generic (PLEG): container finished" podID="63fccac5-b2c5-4909-9a43-7e40ec403a6b" containerID="1279edfd2995135ff21f9fab804f2f3327dda6bf06ed2596315831feb7cefb40" exitCode=0 Jan 30 07:13:34 crc kubenswrapper[4520]: I0130 07:13:34.155710 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd" event={"ID":"63fccac5-b2c5-4909-9a43-7e40ec403a6b","Type":"ContainerDied","Data":"1279edfd2995135ff21f9fab804f2f3327dda6bf06ed2596315831feb7cefb40"} Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.163927 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65pf6" podUID="bac2055a-1086-46f8-af65-d10512490664" containerName="registry-server" containerID="cri-o://c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7" gracePeriod=2 Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.543500 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.549310 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.623884 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-ssh-key-openstack-edpm-ipam\") pod \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.623956 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-utilities\") pod \"bac2055a-1086-46f8-af65-d10512490664\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.624014 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm6sw\" (UniqueName: \"kubernetes.io/projected/bac2055a-1086-46f8-af65-d10512490664-kube-api-access-wm6sw\") pod \"bac2055a-1086-46f8-af65-d10512490664\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.624081 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-798xc\" (UniqueName: \"kubernetes.io/projected/63fccac5-b2c5-4909-9a43-7e40ec403a6b-kube-api-access-798xc\") pod \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.624424 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-catalog-content\") pod \"bac2055a-1086-46f8-af65-d10512490664\" (UID: \"bac2055a-1086-46f8-af65-d10512490664\") " Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.624570 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-inventory\") pod \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\" (UID: \"63fccac5-b2c5-4909-9a43-7e40ec403a6b\") " Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.625602 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-utilities" (OuterVolumeSpecName: "utilities") pod "bac2055a-1086-46f8-af65-d10512490664" (UID: "bac2055a-1086-46f8-af65-d10512490664"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.630447 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac2055a-1086-46f8-af65-d10512490664-kube-api-access-wm6sw" (OuterVolumeSpecName: "kube-api-access-wm6sw") pod "bac2055a-1086-46f8-af65-d10512490664" (UID: "bac2055a-1086-46f8-af65-d10512490664"). InnerVolumeSpecName "kube-api-access-wm6sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.643157 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63fccac5-b2c5-4909-9a43-7e40ec403a6b-kube-api-access-798xc" (OuterVolumeSpecName: "kube-api-access-798xc") pod "63fccac5-b2c5-4909-9a43-7e40ec403a6b" (UID: "63fccac5-b2c5-4909-9a43-7e40ec403a6b"). InnerVolumeSpecName "kube-api-access-798xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.646368 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-inventory" (OuterVolumeSpecName: "inventory") pod "63fccac5-b2c5-4909-9a43-7e40ec403a6b" (UID: "63fccac5-b2c5-4909-9a43-7e40ec403a6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.649279 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "63fccac5-b2c5-4909-9a43-7e40ec403a6b" (UID: "63fccac5-b2c5-4909-9a43-7e40ec403a6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.671029 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bac2055a-1086-46f8-af65-d10512490664" (UID: "bac2055a-1086-46f8-af65-d10512490664"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.726168 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.726204 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.726216 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63fccac5-b2c5-4909-9a43-7e40ec403a6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.726231 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bac2055a-1086-46f8-af65-d10512490664-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.726267 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm6sw\" (UniqueName: \"kubernetes.io/projected/bac2055a-1086-46f8-af65-d10512490664-kube-api-access-wm6sw\") on node \"crc\" DevicePath \"\"" Jan 30 07:13:35 crc kubenswrapper[4520]: I0130 07:13:35.726279 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-798xc\" (UniqueName: \"kubernetes.io/projected/63fccac5-b2c5-4909-9a43-7e40ec403a6b-kube-api-access-798xc\") on node \"crc\" DevicePath \"\"" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.170914 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd" event={"ID":"63fccac5-b2c5-4909-9a43-7e40ec403a6b","Type":"ContainerDied","Data":"fb66b6fb0f0ff4ee97a1ffefacd0f6d0f0b18be114636affbcde0c71fff43a45"} Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.170956 4520 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fb66b6fb0f0ff4ee97a1ffefacd0f6d0f0b18be114636affbcde0c71fff43a45" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.170951 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t5tcd" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.173653 4520 generic.go:334] "Generic (PLEG): container finished" podID="bac2055a-1086-46f8-af65-d10512490664" containerID="c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7" exitCode=0 Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.173684 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65pf6" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.173719 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pf6" event={"ID":"bac2055a-1086-46f8-af65-d10512490664","Type":"ContainerDied","Data":"c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7"} Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.173784 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pf6" event={"ID":"bac2055a-1086-46f8-af65-d10512490664","Type":"ContainerDied","Data":"367d300a7fe895c0922307169afd977731c1abbef0c4d29d63a2e5ce8f43ed42"} Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.173809 4520 scope.go:117] "RemoveContainer" containerID="c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.225805 4520 scope.go:117] "RemoveContainer" containerID="c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.226679 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65pf6"] Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.232413 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65pf6"] Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.253931 4520 scope.go:117] "RemoveContainer" containerID="833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.257035 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"] Jan 30 07:13:36 crc kubenswrapper[4520]: E0130 07:13:36.257321 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63fccac5-b2c5-4909-9a43-7e40ec403a6b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.257336 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="63fccac5-b2c5-4909-9a43-7e40ec403a6b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:13:36 crc kubenswrapper[4520]: E0130 07:13:36.257349 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac2055a-1086-46f8-af65-d10512490664" containerName="registry-server" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.257355 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac2055a-1086-46f8-af65-d10512490664" containerName="registry-server" Jan 30 07:13:36 crc kubenswrapper[4520]: E0130 07:13:36.257368 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac2055a-1086-46f8-af65-d10512490664" containerName="extract-utilities" Jan 30 07:13:36 crc 
Jan 30 07:13:36 crc kubenswrapper[4520]: E0130 07:13:36.257387 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac2055a-1086-46f8-af65-d10512490664" containerName="extract-content"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.257391 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac2055a-1086-46f8-af65-d10512490664" containerName="extract-content"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.257543 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="63fccac5-b2c5-4909-9a43-7e40ec403a6b" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.257566 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac2055a-1086-46f8-af65-d10512490664" containerName="registry-server"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.258044 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.261496 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.261609 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.261767 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.264136 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.267468 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"]
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.282783 4520 scope.go:117] "RemoveContainer" containerID="c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7"
Jan 30 07:13:36 crc kubenswrapper[4520]: E0130 07:13:36.287812 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7\": container with ID starting with c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7 not found: ID does not exist" containerID="c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.287876 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7"} err="failed to get container status \"c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7\": rpc error: code = NotFound desc = could not find container \"c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7\": container with ID starting with c5c8b541386863fafd1f972f35030969cd13860bda527f9e4d623cf2cac40ea7 not found: ID does not exist"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.287913 4520 scope.go:117] "RemoveContainer" containerID="c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7"
Jan 30 07:13:36 crc kubenswrapper[4520]: E0130 07:13:36.290827 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7\": container with ID starting with c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7 not found: ID does not exist" containerID="c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.290861 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7"} err="failed to get container status \"c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7\": rpc error: code = NotFound desc = could not find container \"c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7\": container with ID starting with c6a01d2819ce7a3514b556b023ee8abda02c034ad80555231f74cd843ca78ee7 not found: ID does not exist"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.290888 4520 scope.go:117] "RemoveContainer" containerID="833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f"
Jan 30 07:13:36 crc kubenswrapper[4520]: E0130 07:13:36.291143 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f\": container with ID starting with 833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f not found: ID does not exist" containerID="833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.291166 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f"} err="failed to get container status \"833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f\": rpc error: code = NotFound desc = could not find container \"833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f\": container with ID starting with 833cd84f788b5f2798d1c902a2088de027dd6b5bd50ffa310b8b5e5874a4024f not found: ID does not exist"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.334793 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvgn4\" (UniqueName: \"kubernetes.io/projected/6fc50153-07b9-4447-8e53-e808738a424e-kube-api-access-rvgn4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.334871 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.334925 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.435832 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.436068 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvgn4\" (UniqueName: \"kubernetes.io/projected/6fc50153-07b9-4447-8e53-e808738a424e-kube-api-access-rvgn4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.436208 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.440729 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.441683 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.452181 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvgn4\" (UniqueName: \"kubernetes.io/projected/6fc50153-07b9-4447-8e53-e808738a424e-kube-api-access-rvgn4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"
Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.604192 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c" Jan 30 07:13:36 crc kubenswrapper[4520]: I0130 07:13:36.704978 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac2055a-1086-46f8-af65-d10512490664" path="/var/lib/kubelet/pods/bac2055a-1086-46f8-af65-d10512490664/volumes" Jan 30 07:13:37 crc kubenswrapper[4520]: I0130 07:13:37.088957 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c"] Jan 30 07:13:37 crc kubenswrapper[4520]: I0130 07:13:37.185484 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c" event={"ID":"6fc50153-07b9-4447-8e53-e808738a424e","Type":"ContainerStarted","Data":"c7ea5795a20b14af5291e443510d7572ef1565087f7526da8257497d53601923"} Jan 30 07:13:38 crc kubenswrapper[4520]: I0130 07:13:38.193899 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c" event={"ID":"6fc50153-07b9-4447-8e53-e808738a424e","Type":"ContainerStarted","Data":"29a2482b6f411027b2180bac39d71a098fbb43f5617c8c9ab338b686705e554f"} Jan 30 07:13:38 crc kubenswrapper[4520]: I0130 07:13:38.215539 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c" podStartSLOduration=1.677249698 podStartE2EDuration="2.215510554s" podCreationTimestamp="2026-01-30 07:13:36 +0000 UTC" firstStartedPulling="2026-01-30 07:13:37.100594058 +0000 UTC m=+1730.728946239" lastFinishedPulling="2026-01-30 07:13:37.638854915 +0000 UTC m=+1731.267207095" observedRunningTime="2026-01-30 07:13:38.207958945 +0000 UTC m=+1731.836311126" watchObservedRunningTime="2026-01-30 07:13:38.215510554 +0000 UTC m=+1731.843862735" Jan 30 07:13:38 crc kubenswrapper[4520]: I0130 07:13:38.685872 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:13:38 crc kubenswrapper[4520]: E0130 07:13:38.686459 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:13:50 crc kubenswrapper[4520]: I0130 07:13:50.686804 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:13:50 crc kubenswrapper[4520]: E0130 07:13:50.687878 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:13:55 crc kubenswrapper[4520]: I0130 07:13:55.901019 4520 scope.go:117] "RemoveContainer" containerID="c1e157615aef27a6d2496569f4094023ce4b356a17064d3dafd925d951d2bebc" Jan 30 07:13:55 crc kubenswrapper[4520]: I0130 07:13:55.933896 4520 scope.go:117] "RemoveContainer" 
containerID="c3cf98f8b16c1d7fee08a59d751e517faf1688c94105516cae5d5be7dfa204b4" Jan 30 07:14:05 crc kubenswrapper[4520]: I0130 07:14:05.685545 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:14:05 crc kubenswrapper[4520]: E0130 07:14:05.686303 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:14:06 crc kubenswrapper[4520]: I0130 07:14:06.027071 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-m2hw8"] Jan 30 07:14:06 crc kubenswrapper[4520]: I0130 07:14:06.033533 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-m2hw8"] Jan 30 07:14:06 crc kubenswrapper[4520]: I0130 07:14:06.697608 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed" path="/var/lib/kubelet/pods/f696c12c-c9d1-4fbd-a0f7-01a5fb7f8bed/volumes" Jan 30 07:14:13 crc kubenswrapper[4520]: I0130 07:14:13.529069 4520 generic.go:334] "Generic (PLEG): container finished" podID="6fc50153-07b9-4447-8e53-e808738a424e" containerID="29a2482b6f411027b2180bac39d71a098fbb43f5617c8c9ab338b686705e554f" exitCode=0 Jan 30 07:14:13 crc kubenswrapper[4520]: I0130 07:14:13.529135 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c" event={"ID":"6fc50153-07b9-4447-8e53-e808738a424e","Type":"ContainerDied","Data":"29a2482b6f411027b2180bac39d71a098fbb43f5617c8c9ab338b686705e554f"} Jan 30 07:14:14 crc kubenswrapper[4520]: I0130 07:14:14.874642 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c" Jan 30 07:14:14 crc kubenswrapper[4520]: I0130 07:14:14.953853 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-inventory\") pod \"6fc50153-07b9-4447-8e53-e808738a424e\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " Jan 30 07:14:14 crc kubenswrapper[4520]: I0130 07:14:14.953959 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-ssh-key-openstack-edpm-ipam\") pod \"6fc50153-07b9-4447-8e53-e808738a424e\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " Jan 30 07:14:14 crc kubenswrapper[4520]: I0130 07:14:14.954036 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvgn4\" (UniqueName: \"kubernetes.io/projected/6fc50153-07b9-4447-8e53-e808738a424e-kube-api-access-rvgn4\") pod \"6fc50153-07b9-4447-8e53-e808738a424e\" (UID: \"6fc50153-07b9-4447-8e53-e808738a424e\") " Jan 30 07:14:14 crc kubenswrapper[4520]: I0130 07:14:14.958803 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc50153-07b9-4447-8e53-e808738a424e-kube-api-access-rvgn4" (OuterVolumeSpecName: "kube-api-access-rvgn4") pod "6fc50153-07b9-4447-8e53-e808738a424e" (UID: "6fc50153-07b9-4447-8e53-e808738a424e"). InnerVolumeSpecName "kube-api-access-rvgn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:14:14 crc kubenswrapper[4520]: I0130 07:14:14.975252 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6fc50153-07b9-4447-8e53-e808738a424e" (UID: "6fc50153-07b9-4447-8e53-e808738a424e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:14:14 crc kubenswrapper[4520]: I0130 07:14:14.975689 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-inventory" (OuterVolumeSpecName: "inventory") pod "6fc50153-07b9-4447-8e53-e808738a424e" (UID: "6fc50153-07b9-4447-8e53-e808738a424e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.056423 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvgn4\" (UniqueName: \"kubernetes.io/projected/6fc50153-07b9-4447-8e53-e808738a424e-kube-api-access-rvgn4\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.056448 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.056458 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50153-07b9-4447-8e53-e808738a424e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.542916 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c" event={"ID":"6fc50153-07b9-4447-8e53-e808738a424e","Type":"ContainerDied","Data":"c7ea5795a20b14af5291e443510d7572ef1565087f7526da8257497d53601923"} Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.542997 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ea5795a20b14af5291e443510d7572ef1565087f7526da8257497d53601923" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.542949 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pzd6c" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.620251 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-z4brq"] Jan 30 07:14:15 crc kubenswrapper[4520]: E0130 07:14:15.620792 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc50153-07b9-4447-8e53-e808738a424e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.620818 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc50153-07b9-4447-8e53-e808738a424e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.620987 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc50153-07b9-4447-8e53-e808738a424e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.621570 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.627185 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.628154 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.628243 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.628256 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.638655 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-z4brq"] Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.664792 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s76mm\" (UniqueName: \"kubernetes.io/projected/70ef840a-7684-4f34-b525-7702d156cc56-kube-api-access-s76mm\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.665063 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.665120 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.767879 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s76mm\" (UniqueName: \"kubernetes.io/projected/70ef840a-7684-4f34-b525-7702d156cc56-kube-api-access-s76mm\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.767970 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.768013 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc 
kubenswrapper[4520]: I0130 07:14:15.773948 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.775257 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.786607 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s76mm\" (UniqueName: \"kubernetes.io/projected/70ef840a-7684-4f34-b525-7702d156cc56-kube-api-access-s76mm\") pod \"ssh-known-hosts-edpm-deployment-z4brq\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:15 crc kubenswrapper[4520]: I0130 07:14:15.937933 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:16 crc kubenswrapper[4520]: I0130 07:14:16.433747 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-z4brq"] Jan 30 07:14:16 crc kubenswrapper[4520]: I0130 07:14:16.435564 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 07:14:16 crc kubenswrapper[4520]: I0130 07:14:16.554757 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" event={"ID":"70ef840a-7684-4f34-b525-7702d156cc56","Type":"ContainerStarted","Data":"c1252457646edc76b0e2c91f84c100749f8f7be9210d9b6c39e988dee2b23554"} Jan 30 07:14:17 crc kubenswrapper[4520]: I0130 07:14:17.566205 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" event={"ID":"70ef840a-7684-4f34-b525-7702d156cc56","Type":"ContainerStarted","Data":"3e195e49b4a14a339359b620382ddcf8ed4ab0875b58831bf82cd3dd309d8f9b"} Jan 30 07:14:17 crc kubenswrapper[4520]: I0130 07:14:17.583273 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" podStartSLOduration=1.847930088 podStartE2EDuration="2.583258259s" podCreationTimestamp="2026-01-30 07:14:15 +0000 UTC" firstStartedPulling="2026-01-30 07:14:16.435321396 +0000 UTC m=+1770.063673576" lastFinishedPulling="2026-01-30 07:14:17.170649565 +0000 UTC m=+1770.799001747" observedRunningTime="2026-01-30 07:14:17.580848248 +0000 UTC m=+1771.209200429" watchObservedRunningTime="2026-01-30 07:14:17.583258259 +0000 UTC m=+1771.211610440" Jan 30 07:14:20 crc kubenswrapper[4520]: I0130 07:14:20.685704 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:14:20 crc kubenswrapper[4520]: E0130 07:14:20.687363 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:14:22 crc kubenswrapper[4520]: I0130 07:14:22.599741 4520 generic.go:334] "Generic (PLEG): container finished" podID="70ef840a-7684-4f34-b525-7702d156cc56" containerID="3e195e49b4a14a339359b620382ddcf8ed4ab0875b58831bf82cd3dd309d8f9b" exitCode=0 Jan 30 07:14:22 crc kubenswrapper[4520]: I0130 07:14:22.599807 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" event={"ID":"70ef840a-7684-4f34-b525-7702d156cc56","Type":"ContainerDied","Data":"3e195e49b4a14a339359b620382ddcf8ed4ab0875b58831bf82cd3dd309d8f9b"} Jan 30 07:14:23 crc kubenswrapper[4520]: I0130 07:14:23.910876 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.030786 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s76mm\" (UniqueName: \"kubernetes.io/projected/70ef840a-7684-4f34-b525-7702d156cc56-kube-api-access-s76mm\") pod \"70ef840a-7684-4f34-b525-7702d156cc56\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.031037 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-ssh-key-openstack-edpm-ipam\") pod \"70ef840a-7684-4f34-b525-7702d156cc56\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.031111 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-inventory-0\") pod \"70ef840a-7684-4f34-b525-7702d156cc56\" (UID: \"70ef840a-7684-4f34-b525-7702d156cc56\") " Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.036602 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ef840a-7684-4f34-b525-7702d156cc56-kube-api-access-s76mm" (OuterVolumeSpecName: "kube-api-access-s76mm") pod "70ef840a-7684-4f34-b525-7702d156cc56" (UID: "70ef840a-7684-4f34-b525-7702d156cc56"). InnerVolumeSpecName "kube-api-access-s76mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.056982 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "70ef840a-7684-4f34-b525-7702d156cc56" (UID: "70ef840a-7684-4f34-b525-7702d156cc56"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.059232 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70ef840a-7684-4f34-b525-7702d156cc56" (UID: "70ef840a-7684-4f34-b525-7702d156cc56"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.133171 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s76mm\" (UniqueName: \"kubernetes.io/projected/70ef840a-7684-4f34-b525-7702d156cc56-kube-api-access-s76mm\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.133199 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.133210 4520 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/70ef840a-7684-4f34-b525-7702d156cc56-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.613918 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" event={"ID":"70ef840a-7684-4f34-b525-7702d156cc56","Type":"ContainerDied","Data":"c1252457646edc76b0e2c91f84c100749f8f7be9210d9b6c39e988dee2b23554"} Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.613959 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1252457646edc76b0e2c91f84c100749f8f7be9210d9b6c39e988dee2b23554" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.613963 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-z4brq" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.671165 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657"] Jan 30 07:14:24 crc kubenswrapper[4520]: E0130 07:14:24.675393 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ef840a-7684-4f34-b525-7702d156cc56" containerName="ssh-known-hosts-edpm-deployment" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.675416 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ef840a-7684-4f34-b525-7702d156cc56" containerName="ssh-known-hosts-edpm-deployment" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.675615 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ef840a-7684-4f34-b525-7702d156cc56" containerName="ssh-known-hosts-edpm-deployment" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.676241 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.679656 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.679704 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.679669 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.679908 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.682206 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657"] Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.743636 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.743788 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6tj9\" (UniqueName: \"kubernetes.io/projected/6232e5d4-bc1a-456a-b1cd-b01c732255f5-kube-api-access-x6tj9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.743882 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.846083 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.846139 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6tj9\" (UniqueName: \"kubernetes.io/projected/6232e5d4-bc1a-456a-b1cd-b01c732255f5-kube-api-access-x6tj9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.846173 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.849830 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.850944 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:24 crc kubenswrapper[4520]: I0130 07:14:24.861748 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6tj9\" (UniqueName: \"kubernetes.io/projected/6232e5d4-bc1a-456a-b1cd-b01c732255f5-kube-api-access-x6tj9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf657\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:25 crc kubenswrapper[4520]: I0130 07:14:25.014173 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:25 crc kubenswrapper[4520]: I0130 07:14:25.433359 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657"] Jan 30 07:14:25 crc kubenswrapper[4520]: I0130 07:14:25.620700 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" event={"ID":"6232e5d4-bc1a-456a-b1cd-b01c732255f5","Type":"ContainerStarted","Data":"f3dade86b75729d47d73c6e935e2835b96bf7afdaa0d619d5b130443db2e0de0"} Jan 30 07:14:26 crc kubenswrapper[4520]: I0130 07:14:26.628763 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" event={"ID":"6232e5d4-bc1a-456a-b1cd-b01c732255f5","Type":"ContainerStarted","Data":"b8b496aaeb0501fb4e3cb0da56ec11f024d748c4ee02e1986811ef1446b03ca7"} Jan 30 07:14:26 crc kubenswrapper[4520]: I0130 07:14:26.649434 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" podStartSLOduration=2.167537793 podStartE2EDuration="2.649420983s" podCreationTimestamp="2026-01-30 07:14:24 +0000 UTC" firstStartedPulling="2026-01-30 07:14:25.440651455 +0000 UTC m=+1779.069003637" lastFinishedPulling="2026-01-30 07:14:25.922534646 +0000 UTC m=+1779.550886827" observedRunningTime="2026-01-30 07:14:26.648927845 +0000 UTC m=+1780.277280026" watchObservedRunningTime="2026-01-30 07:14:26.649420983 +0000 UTC m=+1780.277773164" Jan 30 07:14:32 crc kubenswrapper[4520]: I0130 07:14:32.673037 4520 generic.go:334] "Generic (PLEG): container finished" podID="6232e5d4-bc1a-456a-b1cd-b01c732255f5" containerID="b8b496aaeb0501fb4e3cb0da56ec11f024d748c4ee02e1986811ef1446b03ca7" exitCode=0 Jan 30 07:14:32 crc kubenswrapper[4520]: I0130 07:14:32.673119 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" event={"ID":"6232e5d4-bc1a-456a-b1cd-b01c732255f5","Type":"ContainerDied","Data":"b8b496aaeb0501fb4e3cb0da56ec11f024d748c4ee02e1986811ef1446b03ca7"} Jan 30 07:14:32 crc kubenswrapper[4520]: I0130 07:14:32.687132 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:14:32 crc kubenswrapper[4520]: E0130 07:14:32.687738 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:14:33 crc kubenswrapper[4520]: I0130 07:14:33.934603 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.091166 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-inventory\") pod \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.091340 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6tj9\" (UniqueName: \"kubernetes.io/projected/6232e5d4-bc1a-456a-b1cd-b01c732255f5-kube-api-access-x6tj9\") pod \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.091493 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-ssh-key-openstack-edpm-ipam\") pod \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\" (UID: \"6232e5d4-bc1a-456a-b1cd-b01c732255f5\") " Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.098625 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6232e5d4-bc1a-456a-b1cd-b01c732255f5-kube-api-access-x6tj9" (OuterVolumeSpecName: "kube-api-access-x6tj9") pod "6232e5d4-bc1a-456a-b1cd-b01c732255f5" (UID: "6232e5d4-bc1a-456a-b1cd-b01c732255f5"). InnerVolumeSpecName "kube-api-access-x6tj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.111398 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6232e5d4-bc1a-456a-b1cd-b01c732255f5" (UID: "6232e5d4-bc1a-456a-b1cd-b01c732255f5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.112620 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-inventory" (OuterVolumeSpecName: "inventory") pod "6232e5d4-bc1a-456a-b1cd-b01c732255f5" (UID: "6232e5d4-bc1a-456a-b1cd-b01c732255f5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.194345 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.194566 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6tj9\" (UniqueName: \"kubernetes.io/projected/6232e5d4-bc1a-456a-b1cd-b01c732255f5-kube-api-access-x6tj9\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.194630 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6232e5d4-bc1a-456a-b1cd-b01c732255f5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.686391 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.692619 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf657" event={"ID":"6232e5d4-bc1a-456a-b1cd-b01c732255f5","Type":"ContainerDied","Data":"f3dade86b75729d47d73c6e935e2835b96bf7afdaa0d619d5b130443db2e0de0"} Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.692652 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3dade86b75729d47d73c6e935e2835b96bf7afdaa0d619d5b130443db2e0de0" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.790934 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf"] Jan 30 07:14:34 crc kubenswrapper[4520]: E0130 07:14:34.791261 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6232e5d4-bc1a-456a-b1cd-b01c732255f5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.791280 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="6232e5d4-bc1a-456a-b1cd-b01c732255f5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.791465 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="6232e5d4-bc1a-456a-b1cd-b01c732255f5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.792001 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.798608 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.798945 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.799327 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.799828 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.818919 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf"] Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.905619 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxnl5\" (UniqueName: \"kubernetes.io/projected/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-kube-api-access-qxnl5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.905807 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:34 crc kubenswrapper[4520]: I0130 07:14:34.905919 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.007611 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxnl5\" (UniqueName: \"kubernetes.io/projected/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-kube-api-access-qxnl5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.007752 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.007848 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.010674 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.010785 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.020303 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxnl5\" (UniqueName: \"kubernetes.io/projected/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-kube-api-access-qxnl5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.107194 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.555079 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf"] Jan 30 07:14:35 crc kubenswrapper[4520]: I0130 07:14:35.692493 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" event={"ID":"b10b3ce6-65c2-450d-b903-6c42aa05c2e8","Type":"ContainerStarted","Data":"2d08213c81b27b43d2d5e9cca48e2258d6b8fe84a2c018d3a8b9ca7280ca3f0c"} Jan 30 07:14:36 crc kubenswrapper[4520]: I0130 07:14:36.698586 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" event={"ID":"b10b3ce6-65c2-450d-b903-6c42aa05c2e8","Type":"ContainerStarted","Data":"ca03766ce7fe5cf2c2d672403e93db0d8d10e8d4878e1ce34055a991bab4aaed"} Jan 30 07:14:36 crc kubenswrapper[4520]: I0130 07:14:36.720540 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" podStartSLOduration=2.239774868 podStartE2EDuration="2.720526029s" podCreationTimestamp="2026-01-30 07:14:34 +0000 UTC" firstStartedPulling="2026-01-30 07:14:35.548676743 +0000 UTC m=+1789.177028924" lastFinishedPulling="2026-01-30 07:14:36.029427904 +0000 UTC m=+1789.657780085" observedRunningTime="2026-01-30 07:14:36.718977027 +0000 UTC m=+1790.347329207" watchObservedRunningTime="2026-01-30 07:14:36.720526029 +0000 UTC m=+1790.348878210" Jan 30 07:14:44 crc kubenswrapper[4520]: I0130 07:14:44.762589 4520 generic.go:334] "Generic (PLEG): container finished" podID="b10b3ce6-65c2-450d-b903-6c42aa05c2e8" containerID="ca03766ce7fe5cf2c2d672403e93db0d8d10e8d4878e1ce34055a991bab4aaed" exitCode=0 Jan 30 07:14:44 crc kubenswrapper[4520]: I0130 07:14:44.762646 4520 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" event={"ID":"b10b3ce6-65c2-450d-b903-6c42aa05c2e8","Type":"ContainerDied","Data":"ca03766ce7fe5cf2c2d672403e93db0d8d10e8d4878e1ce34055a991bab4aaed"} Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.109892 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.311358 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-inventory\") pod \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.311507 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-ssh-key-openstack-edpm-ipam\") pod \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.311650 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxnl5\" (UniqueName: \"kubernetes.io/projected/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-kube-api-access-qxnl5\") pod \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\" (UID: \"b10b3ce6-65c2-450d-b903-6c42aa05c2e8\") " Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.316868 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-kube-api-access-qxnl5" (OuterVolumeSpecName: "kube-api-access-qxnl5") pod "b10b3ce6-65c2-450d-b903-6c42aa05c2e8" (UID: "b10b3ce6-65c2-450d-b903-6c42aa05c2e8"). InnerVolumeSpecName "kube-api-access-qxnl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.333023 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-inventory" (OuterVolumeSpecName: "inventory") pod "b10b3ce6-65c2-450d-b903-6c42aa05c2e8" (UID: "b10b3ce6-65c2-450d-b903-6c42aa05c2e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.336082 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b10b3ce6-65c2-450d-b903-6c42aa05c2e8" (UID: "b10b3ce6-65c2-450d-b903-6c42aa05c2e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.414405 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.414437 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.414448 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxnl5\" (UniqueName: \"kubernetes.io/projected/b10b3ce6-65c2-450d-b903-6c42aa05c2e8-kube-api-access-qxnl5\") on node \"crc\" DevicePath \"\"" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.778885 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" event={"ID":"b10b3ce6-65c2-450d-b903-6c42aa05c2e8","Type":"ContainerDied","Data":"2d08213c81b27b43d2d5e9cca48e2258d6b8fe84a2c018d3a8b9ca7280ca3f0c"} Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.779183 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d08213c81b27b43d2d5e9cca48e2258d6b8fe84a2c018d3a8b9ca7280ca3f0c" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.778997 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vgbsf" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.862867 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v"] Jan 30 07:14:46 crc kubenswrapper[4520]: E0130 07:14:46.863574 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10b3ce6-65c2-450d-b903-6c42aa05c2e8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.863602 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10b3ce6-65c2-450d-b903-6c42aa05c2e8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.863852 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10b3ce6-65c2-450d-b903-6c42aa05c2e8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.864849 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.870850 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.871858 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.872130 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.873643 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.873813 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.873939 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.873945 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.876926 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.881710 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v"] Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926614 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926657 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926685 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926711 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-telemetry-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926766 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926784 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926804 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926822 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926858 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926879 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926899 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: 
\"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926926 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.926975 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98ts5\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-kube-api-access-98ts5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:46 crc kubenswrapper[4520]: I0130 07:14:46.927028 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.029202 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.029424 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.029532 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.029615 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.029743 4520 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.030378 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.030462 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.030600 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.030692 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.030771 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.030854 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.030935 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.031013 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98ts5\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-kube-api-access-98ts5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.031119 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.033838 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.034435 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.035369 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.036990 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.037321 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.037478 4520 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.038211 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.039018 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.039354 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.039822 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.042182 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.042464 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.042894 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: 
\"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.046945 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98ts5\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-kube-api-access-98ts5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.189507 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.645849 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v"] Jan 30 07:14:47 crc kubenswrapper[4520]: W0130 07:14:47.650888 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9160234d_a948_4513_85bc_a3bb4f7a54fc.slice/crio-041eeecf21ad0ca964aa8a272960314bca7157910e9f40f465f14bb6c436e1ba WatchSource:0}: Error finding container 041eeecf21ad0ca964aa8a272960314bca7157910e9f40f465f14bb6c436e1ba: Status 404 returned error can't find the container with id 041eeecf21ad0ca964aa8a272960314bca7157910e9f40f465f14bb6c436e1ba Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.686508 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:14:47 crc kubenswrapper[4520]: E0130 07:14:47.686746 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:14:47 crc kubenswrapper[4520]: I0130 07:14:47.788290 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" event={"ID":"9160234d-a948-4513-85bc-a3bb4f7a54fc","Type":"ContainerStarted","Data":"041eeecf21ad0ca964aa8a272960314bca7157910e9f40f465f14bb6c436e1ba"} Jan 30 07:14:48 crc kubenswrapper[4520]: I0130 07:14:48.795878 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" event={"ID":"9160234d-a948-4513-85bc-a3bb4f7a54fc","Type":"ContainerStarted","Data":"de23cd49f55b65d5ff77ff0434a51af2001c4bf277cfee4aaf534c7d0aca4a87"} Jan 30 07:14:48 crc kubenswrapper[4520]: I0130 07:14:48.813978 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" podStartSLOduration=2.302909324 podStartE2EDuration="2.813964286s" podCreationTimestamp="2026-01-30 07:14:46 +0000 UTC" firstStartedPulling="2026-01-30 07:14:47.653821241 +0000 UTC m=+1801.282173423" lastFinishedPulling="2026-01-30 07:14:48.164876204 +0000 UTC m=+1801.793228385" observedRunningTime="2026-01-30 07:14:48.811620048 +0000 UTC m=+1802.439972230" watchObservedRunningTime="2026-01-30 07:14:48.813964286 +0000 UTC m=+1802.442316468" Jan 30 07:14:56 crc 
kubenswrapper[4520]: I0130 07:14:56.053105 4520 scope.go:117] "RemoveContainer" containerID="ffb33993ee1ab0f2915731e0eb0b9235930223f9e5913bfaa196dcc2c36fd749" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.135101 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw"] Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.137068 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.138881 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.138906 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.148171 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw"] Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.186557 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc3e954a-9302-42c8-a729-5d277eb821fc-secret-volume\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.186602 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6zf2\" (UniqueName: \"kubernetes.io/projected/bc3e954a-9302-42c8-a729-5d277eb821fc-kube-api-access-k6zf2\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.186632 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc3e954a-9302-42c8-a729-5d277eb821fc-config-volume\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.287974 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc3e954a-9302-42c8-a729-5d277eb821fc-secret-volume\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.288018 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6zf2\" (UniqueName: \"kubernetes.io/projected/bc3e954a-9302-42c8-a729-5d277eb821fc-kube-api-access-k6zf2\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.288051 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/bc3e954a-9302-42c8-a729-5d277eb821fc-config-volume\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.288821 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc3e954a-9302-42c8-a729-5d277eb821fc-config-volume\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.294155 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc3e954a-9302-42c8-a729-5d277eb821fc-secret-volume\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.303892 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6zf2\" (UniqueName: \"kubernetes.io/projected/bc3e954a-9302-42c8-a729-5d277eb821fc-kube-api-access-k6zf2\") pod \"collect-profiles-29495955-rkfmw\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.453545 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.857480 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw"] Jan 30 07:15:00 crc kubenswrapper[4520]: W0130 07:15:00.863694 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc3e954a_9302_42c8_a729_5d277eb821fc.slice/crio-88822eb438c455d6f1e0d2a5aa0fe04edfd251521684039f87213f53ffdfb366 WatchSource:0}: Error finding container 88822eb438c455d6f1e0d2a5aa0fe04edfd251521684039f87213f53ffdfb366: Status 404 returned error can't find the container with id 88822eb438c455d6f1e0d2a5aa0fe04edfd251521684039f87213f53ffdfb366 Jan 30 07:15:00 crc kubenswrapper[4520]: I0130 07:15:00.894952 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" event={"ID":"bc3e954a-9302-42c8-a729-5d277eb821fc","Type":"ContainerStarted","Data":"88822eb438c455d6f1e0d2a5aa0fe04edfd251521684039f87213f53ffdfb366"} Jan 30 07:15:01 crc kubenswrapper[4520]: I0130 07:15:01.904462 4520 generic.go:334] "Generic (PLEG): container finished" podID="bc3e954a-9302-42c8-a729-5d277eb821fc" containerID="e8ad25f22891e130911a01be1386a450861f97eed8b68071c5a6ce19fb4d3fa3" exitCode=0 Jan 30 07:15:01 crc kubenswrapper[4520]: I0130 07:15:01.904607 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" event={"ID":"bc3e954a-9302-42c8-a729-5d277eb821fc","Type":"ContainerDied","Data":"e8ad25f22891e130911a01be1386a450861f97eed8b68071c5a6ce19fb4d3fa3"} Jan 30 07:15:02 crc kubenswrapper[4520]: I0130 07:15:02.686167 4520 scope.go:117] "RemoveContainer" 
containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:15:02 crc kubenswrapper[4520]: E0130 07:15:02.686489 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.179432 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.353196 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6zf2\" (UniqueName: \"kubernetes.io/projected/bc3e954a-9302-42c8-a729-5d277eb821fc-kube-api-access-k6zf2\") pod \"bc3e954a-9302-42c8-a729-5d277eb821fc\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.353321 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc3e954a-9302-42c8-a729-5d277eb821fc-secret-volume\") pod \"bc3e954a-9302-42c8-a729-5d277eb821fc\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.353396 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc3e954a-9302-42c8-a729-5d277eb821fc-config-volume\") pod \"bc3e954a-9302-42c8-a729-5d277eb821fc\" (UID: \"bc3e954a-9302-42c8-a729-5d277eb821fc\") " Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.353997 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3e954a-9302-42c8-a729-5d277eb821fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "bc3e954a-9302-42c8-a729-5d277eb821fc" (UID: "bc3e954a-9302-42c8-a729-5d277eb821fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.359395 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3e954a-9302-42c8-a729-5d277eb821fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bc3e954a-9302-42c8-a729-5d277eb821fc" (UID: "bc3e954a-9302-42c8-a729-5d277eb821fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.364629 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3e954a-9302-42c8-a729-5d277eb821fc-kube-api-access-k6zf2" (OuterVolumeSpecName: "kube-api-access-k6zf2") pod "bc3e954a-9302-42c8-a729-5d277eb821fc" (UID: "bc3e954a-9302-42c8-a729-5d277eb821fc"). InnerVolumeSpecName "kube-api-access-k6zf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.455751 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6zf2\" (UniqueName: \"kubernetes.io/projected/bc3e954a-9302-42c8-a729-5d277eb821fc-kube-api-access-k6zf2\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.455780 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc3e954a-9302-42c8-a729-5d277eb821fc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.455790 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc3e954a-9302-42c8-a729-5d277eb821fc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.921981 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" event={"ID":"bc3e954a-9302-42c8-a729-5d277eb821fc","Type":"ContainerDied","Data":"88822eb438c455d6f1e0d2a5aa0fe04edfd251521684039f87213f53ffdfb366"} Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.922431 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88822eb438c455d6f1e0d2a5aa0fe04edfd251521684039f87213f53ffdfb366" Jan 30 07:15:03 crc kubenswrapper[4520]: I0130 07:15:03.922034 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw" Jan 30 07:15:16 crc kubenswrapper[4520]: I0130 07:15:16.006156 4520 generic.go:334] "Generic (PLEG): container finished" podID="9160234d-a948-4513-85bc-a3bb4f7a54fc" containerID="de23cd49f55b65d5ff77ff0434a51af2001c4bf277cfee4aaf534c7d0aca4a87" exitCode=0 Jan 30 07:15:16 crc kubenswrapper[4520]: I0130 07:15:16.006229 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" event={"ID":"9160234d-a948-4513-85bc-a3bb4f7a54fc","Type":"ContainerDied","Data":"de23cd49f55b65d5ff77ff0434a51af2001c4bf277cfee4aaf534c7d0aca4a87"} Jan 30 07:15:16 crc kubenswrapper[4520]: I0130 07:15:16.693465 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:15:16 crc kubenswrapper[4520]: E0130 07:15:16.693792 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.501433 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.539461 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-telemetry-combined-ca-bundle\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.539509 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-nova-combined-ca-bundle\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.539622 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98ts5\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-kube-api-access-98ts5\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.539645 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-libvirt-combined-ca-bundle\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.539795 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.540476 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.540802 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ovn-combined-ca-bundle\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.540890 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-bootstrap-combined-ca-bundle\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.540930 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: 
\"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.540953 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ssh-key-openstack-edpm-ipam\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.541007 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.541038 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-inventory\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.541060 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-repo-setup-combined-ca-bundle\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.541105 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-neutron-metadata-combined-ca-bundle\") pod \"9160234d-a948-4513-85bc-a3bb4f7a54fc\" (UID: \"9160234d-a948-4513-85bc-a3bb4f7a54fc\") " Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549022 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549364 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549441 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549884 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549889 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549907 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549951 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549975 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-kube-api-access-98ts5" (OuterVolumeSpecName: "kube-api-access-98ts5") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "kube-api-access-98ts5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549983 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.549999 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.550196 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.558993 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.568460 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-inventory" (OuterVolumeSpecName: "inventory") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.568890 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9160234d-a948-4513-85bc-a3bb4f7a54fc" (UID: "9160234d-a948-4513-85bc-a3bb4f7a54fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642397 4520 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642431 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642447 4520 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642458 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642468 4520 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642478 4520 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642488 4520 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642498 4520 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642507 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98ts5\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-kube-api-access-98ts5\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642551 4520 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642561 4520 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642571 4520 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9160234d-a948-4513-85bc-a3bb4f7a54fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642582 4520 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:17 crc kubenswrapper[4520]: I0130 07:15:17.642592 4520 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9160234d-a948-4513-85bc-a3bb4f7a54fc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.020527 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" event={"ID":"9160234d-a948-4513-85bc-a3bb4f7a54fc","Type":"ContainerDied","Data":"041eeecf21ad0ca964aa8a272960314bca7157910e9f40f465f14bb6c436e1ba"} Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.020572 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="041eeecf21ad0ca964aa8a272960314bca7157910e9f40f465f14bb6c436e1ba" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.020579 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lkz2v" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.106120 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq"] Jan 30 07:15:18 crc kubenswrapper[4520]: E0130 07:15:18.112418 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9160234d-a948-4513-85bc-a3bb4f7a54fc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.112451 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9160234d-a948-4513-85bc-a3bb4f7a54fc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 07:15:18 crc kubenswrapper[4520]: E0130 07:15:18.112493 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3e954a-9302-42c8-a729-5d277eb821fc" containerName="collect-profiles" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.112501 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3e954a-9302-42c8-a729-5d277eb821fc" containerName="collect-profiles" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.112712 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9160234d-a948-4513-85bc-a3bb4f7a54fc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.112728 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3e954a-9302-42c8-a729-5d277eb821fc" containerName="collect-profiles" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.113317 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.113556 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq"] Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.115282 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.115444 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.115577 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.115695 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.115809 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.148692 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.148771 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.148845 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8n6f\" (UniqueName: \"kubernetes.io/projected/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-kube-api-access-m8n6f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.148874 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.148922 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.250407 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.250528 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.250583 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8n6f\" (UniqueName: \"kubernetes.io/projected/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-kube-api-access-m8n6f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.250616 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.250667 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.251684 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.256586 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.259182 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.259206 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.267665 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8n6f\" (UniqueName: \"kubernetes.io/projected/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-kube-api-access-m8n6f\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7l9bq\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.434954 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:15:18 crc kubenswrapper[4520]: I0130 07:15:18.886416 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq"] Jan 30 07:15:19 crc kubenswrapper[4520]: I0130 07:15:19.029118 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" event={"ID":"50e58d51-5ce1-48a7-90c8-6ff95a8119c4","Type":"ContainerStarted","Data":"660b7b5ba86185c3f15cfcd9f040b06c4e1d91545a28df08f1372888a18a5689"} Jan 30 07:15:20 crc kubenswrapper[4520]: I0130 07:15:20.037048 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" event={"ID":"50e58d51-5ce1-48a7-90c8-6ff95a8119c4","Type":"ContainerStarted","Data":"232d3b566f90700be0de3bcc24b00d8c8efa73d89ecc123760c221e7d9c19fdf"} Jan 30 07:15:30 crc kubenswrapper[4520]: I0130 07:15:30.686090 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:15:30 crc kubenswrapper[4520]: E0130 07:15:30.686837 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:15:41 crc kubenswrapper[4520]: I0130 07:15:41.687123 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:15:41 crc kubenswrapper[4520]: E0130 07:15:41.687847 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:15:55 crc kubenswrapper[4520]: I0130 07:15:55.687275 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:15:55 crc kubenswrapper[4520]: E0130 07:15:55.688396 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:16:09 crc kubenswrapper[4520]: I0130 07:16:09.441609 4520 generic.go:334] "Generic (PLEG): container finished" podID="50e58d51-5ce1-48a7-90c8-6ff95a8119c4" containerID="232d3b566f90700be0de3bcc24b00d8c8efa73d89ecc123760c221e7d9c19fdf" exitCode=0 Jan 30 07:16:09 crc kubenswrapper[4520]: I0130 07:16:09.441676 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" event={"ID":"50e58d51-5ce1-48a7-90c8-6ff95a8119c4","Type":"ContainerDied","Data":"232d3b566f90700be0de3bcc24b00d8c8efa73d89ecc123760c221e7d9c19fdf"} Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.686101 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:16:10 crc kubenswrapper[4520]: E0130 07:16:10.686645 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.774688 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.871685 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-inventory\") pod \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.871909 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovncontroller-config-0\") pod \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.871974 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ssh-key-openstack-edpm-ipam\") pod \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.872056 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovn-combined-ca-bundle\") pod \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.872178 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8n6f\" (UniqueName: \"kubernetes.io/projected/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-kube-api-access-m8n6f\") pod \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\" (UID: \"50e58d51-5ce1-48a7-90c8-6ff95a8119c4\") " Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 
07:16:10.877095 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "50e58d51-5ce1-48a7-90c8-6ff95a8119c4" (UID: "50e58d51-5ce1-48a7-90c8-6ff95a8119c4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.877417 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-kube-api-access-m8n6f" (OuterVolumeSpecName: "kube-api-access-m8n6f") pod "50e58d51-5ce1-48a7-90c8-6ff95a8119c4" (UID: "50e58d51-5ce1-48a7-90c8-6ff95a8119c4"). InnerVolumeSpecName "kube-api-access-m8n6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.902403 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-inventory" (OuterVolumeSpecName: "inventory") pod "50e58d51-5ce1-48a7-90c8-6ff95a8119c4" (UID: "50e58d51-5ce1-48a7-90c8-6ff95a8119c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.906822 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50e58d51-5ce1-48a7-90c8-6ff95a8119c4" (UID: "50e58d51-5ce1-48a7-90c8-6ff95a8119c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.908294 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "50e58d51-5ce1-48a7-90c8-6ff95a8119c4" (UID: "50e58d51-5ce1-48a7-90c8-6ff95a8119c4"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.975650 4520 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.975696 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8n6f\" (UniqueName: \"kubernetes.io/projected/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-kube-api-access-m8n6f\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.975708 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.975722 4520 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:10 crc kubenswrapper[4520]: I0130 07:16:10.975733 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50e58d51-5ce1-48a7-90c8-6ff95a8119c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.464832 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" event={"ID":"50e58d51-5ce1-48a7-90c8-6ff95a8119c4","Type":"ContainerDied","Data":"660b7b5ba86185c3f15cfcd9f040b06c4e1d91545a28df08f1372888a18a5689"} Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.464885 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660b7b5ba86185c3f15cfcd9f040b06c4e1d91545a28df08f1372888a18a5689" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.464964 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7l9bq" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.537809 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp"] Jan 30 07:16:11 crc kubenswrapper[4520]: E0130 07:16:11.538156 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e58d51-5ce1-48a7-90c8-6ff95a8119c4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.538175 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e58d51-5ce1-48a7-90c8-6ff95a8119c4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.538373 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e58d51-5ce1-48a7-90c8-6ff95a8119c4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.538927 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.541217 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.541420 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.541650 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.541664 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.541796 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.543355 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.553381 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp"] Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.593600 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.593667 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.593708 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7z2j\" (UniqueName: \"kubernetes.io/projected/4e162cbc-a5fb-4d33-baaf-22d191876af7-kube-api-access-z7z2j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.593796 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.593858 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.593898 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.694989 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.695045 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.695082 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7z2j\" (UniqueName: \"kubernetes.io/projected/4e162cbc-a5fb-4d33-baaf-22d191876af7-kube-api-access-z7z2j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.695158 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.695205 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.695239 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") 
" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.702104 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.702482 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.703192 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.705175 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.706373 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.713423 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7z2j\" (UniqueName: \"kubernetes.io/projected/4e162cbc-a5fb-4d33-baaf-22d191876af7-kube-api-access-z7z2j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:11 crc kubenswrapper[4520]: I0130 07:16:11.864098 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:12 crc kubenswrapper[4520]: I0130 07:16:12.334927 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp"] Jan 30 07:16:12 crc kubenswrapper[4520]: I0130 07:16:12.478527 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" event={"ID":"4e162cbc-a5fb-4d33-baaf-22d191876af7","Type":"ContainerStarted","Data":"ff1516dfd63fef863c7551ba805a01680a936b18c52c5b04b95694bbb9eef778"} Jan 30 07:16:13 crc kubenswrapper[4520]: I0130 07:16:13.485563 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" event={"ID":"4e162cbc-a5fb-4d33-baaf-22d191876af7","Type":"ContainerStarted","Data":"65cb8c75405ab9dd34cb00624723e05e30dc1748ab6919cd52ab825f4b9ca960"} Jan 30 07:16:23 crc kubenswrapper[4520]: I0130 07:16:23.685128 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:16:23 crc kubenswrapper[4520]: E0130 07:16:23.685957 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:16:34 crc kubenswrapper[4520]: I0130 07:16:34.685805 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:16:34 crc kubenswrapper[4520]: E0130 07:16:34.686565 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:16:45 crc kubenswrapper[4520]: I0130 07:16:45.685910 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:16:45 crc kubenswrapper[4520]: E0130 07:16:45.686839 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:16:52 crc kubenswrapper[4520]: I0130 07:16:52.784672 4520 generic.go:334] "Generic (PLEG): container finished" podID="4e162cbc-a5fb-4d33-baaf-22d191876af7" containerID="65cb8c75405ab9dd34cb00624723e05e30dc1748ab6919cd52ab825f4b9ca960" exitCode=0 Jan 30 07:16:52 crc kubenswrapper[4520]: I0130 07:16:52.784767 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" 
event={"ID":"4e162cbc-a5fb-4d33-baaf-22d191876af7","Type":"ContainerDied","Data":"65cb8c75405ab9dd34cb00624723e05e30dc1748ab6919cd52ab825f4b9ca960"} Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.357856 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.549127 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7z2j\" (UniqueName: \"kubernetes.io/projected/4e162cbc-a5fb-4d33-baaf-22d191876af7-kube-api-access-z7z2j\") pod \"4e162cbc-a5fb-4d33-baaf-22d191876af7\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.549267 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4e162cbc-a5fb-4d33-baaf-22d191876af7\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.549549 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-inventory\") pod \"4e162cbc-a5fb-4d33-baaf-22d191876af7\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.549580 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-ssh-key-openstack-edpm-ipam\") pod \"4e162cbc-a5fb-4d33-baaf-22d191876af7\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.549652 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-nova-metadata-neutron-config-0\") pod \"4e162cbc-a5fb-4d33-baaf-22d191876af7\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.549686 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-metadata-combined-ca-bundle\") pod \"4e162cbc-a5fb-4d33-baaf-22d191876af7\" (UID: \"4e162cbc-a5fb-4d33-baaf-22d191876af7\") " Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.554561 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e162cbc-a5fb-4d33-baaf-22d191876af7-kube-api-access-z7z2j" (OuterVolumeSpecName: "kube-api-access-z7z2j") pod "4e162cbc-a5fb-4d33-baaf-22d191876af7" (UID: "4e162cbc-a5fb-4d33-baaf-22d191876af7"). InnerVolumeSpecName "kube-api-access-z7z2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.555711 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4e162cbc-a5fb-4d33-baaf-22d191876af7" (UID: "4e162cbc-a5fb-4d33-baaf-22d191876af7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.572994 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-inventory" (OuterVolumeSpecName: "inventory") pod "4e162cbc-a5fb-4d33-baaf-22d191876af7" (UID: "4e162cbc-a5fb-4d33-baaf-22d191876af7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.573319 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4e162cbc-a5fb-4d33-baaf-22d191876af7" (UID: "4e162cbc-a5fb-4d33-baaf-22d191876af7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.574344 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4e162cbc-a5fb-4d33-baaf-22d191876af7" (UID: "4e162cbc-a5fb-4d33-baaf-22d191876af7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.575081 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4e162cbc-a5fb-4d33-baaf-22d191876af7" (UID: "4e162cbc-a5fb-4d33-baaf-22d191876af7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.652778 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7z2j\" (UniqueName: \"kubernetes.io/projected/4e162cbc-a5fb-4d33-baaf-22d191876af7-kube-api-access-z7z2j\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.652825 4520 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.652843 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.652858 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.652870 4520 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.652880 4520 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e162cbc-a5fb-4d33-baaf-22d191876af7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.803741 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" event={"ID":"4e162cbc-a5fb-4d33-baaf-22d191876af7","Type":"ContainerDied","Data":"ff1516dfd63fef863c7551ba805a01680a936b18c52c5b04b95694bbb9eef778"} Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.803814 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff1516dfd63fef863c7551ba805a01680a936b18c52c5b04b95694bbb9eef778" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.803837 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2k6bp" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.892222 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r"] Jan 30 07:16:54 crc kubenswrapper[4520]: E0130 07:16:54.892673 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e162cbc-a5fb-4d33-baaf-22d191876af7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.892722 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e162cbc-a5fb-4d33-baaf-22d191876af7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.892929 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e162cbc-a5fb-4d33-baaf-22d191876af7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.893593 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.899921 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.900129 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.900263 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.900382 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.901061 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.908448 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r"] Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.958434 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.958638 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.958909 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: 
\"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.958956 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn5lx\" (UniqueName: \"kubernetes.io/projected/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-kube-api-access-xn5lx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:54 crc kubenswrapper[4520]: I0130 07:16:54.959118 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.061585 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.061957 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.062083 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.062249 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.062335 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn5lx\" (UniqueName: \"kubernetes.io/projected/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-kube-api-access-xn5lx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.070864 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: 
\"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.070973 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.072188 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.072761 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.083340 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn5lx\" (UniqueName: \"kubernetes.io/projected/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-kube-api-access-xn5lx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-55q7r\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.216067 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.726956 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r"] Jan 30 07:16:55 crc kubenswrapper[4520]: I0130 07:16:55.811307 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" event={"ID":"9102a617-91a2-4170-a0f7-3c34f1d8d0ce","Type":"ContainerStarted","Data":"ecedb49bbe99e2038c90be012a21d04e854e0acc9810315174e7c30047a41f4a"} Jan 30 07:16:56 crc kubenswrapper[4520]: I0130 07:16:56.692333 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:16:56 crc kubenswrapper[4520]: E0130 07:16:56.693864 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:16:57 crc kubenswrapper[4520]: I0130 07:16:57.838534 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" event={"ID":"9102a617-91a2-4170-a0f7-3c34f1d8d0ce","Type":"ContainerStarted","Data":"1734f3a893f52271e4f25d0a7877fbc0673eead09c408d9f41ad900cbd652ac3"} Jan 30 07:16:57 crc kubenswrapper[4520]: I0130 07:16:57.862740 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" podStartSLOduration=2.952681207 podStartE2EDuration="3.862720334s" podCreationTimestamp="2026-01-30 07:16:54 +0000 UTC" firstStartedPulling="2026-01-30 07:16:55.743936826 +0000 UTC m=+1929.372289008" lastFinishedPulling="2026-01-30 07:16:56.653975954 +0000 UTC m=+1930.282328135" observedRunningTime="2026-01-30 07:16:57.854021517 +0000 UTC m=+1931.482373699" watchObservedRunningTime="2026-01-30 07:16:57.862720334 +0000 UTC m=+1931.491072515" Jan 30 07:17:11 crc kubenswrapper[4520]: I0130 07:17:11.685850 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:17:11 crc kubenswrapper[4520]: E0130 07:17:11.686918 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:17:26 crc kubenswrapper[4520]: I0130 07:17:26.690090 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:17:26 crc kubenswrapper[4520]: E0130 07:17:26.690923 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:17:38 crc kubenswrapper[4520]: I0130 07:17:38.686322 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:17:38 crc kubenswrapper[4520]: E0130 07:17:38.687510 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:17:49 crc kubenswrapper[4520]: I0130 07:17:49.685951 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:17:49 crc kubenswrapper[4520]: E0130 07:17:49.686754 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:18:01 crc kubenswrapper[4520]: I0130 07:18:01.685427 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:18:02 crc kubenswrapper[4520]: I0130 07:18:02.376806 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"6f9d3d41b0a37515cd60005bd2f7590ed422a66a445f98c088e023f788133e52"} Jan 30 07:20:27 crc kubenswrapper[4520]: I0130 07:20:27.332706 4520 generic.go:334] "Generic (PLEG): container finished" podID="9102a617-91a2-4170-a0f7-3c34f1d8d0ce" containerID="1734f3a893f52271e4f25d0a7877fbc0673eead09c408d9f41ad900cbd652ac3" exitCode=0 Jan 30 07:20:27 crc kubenswrapper[4520]: I0130 07:20:27.333076 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" event={"ID":"9102a617-91a2-4170-a0f7-3c34f1d8d0ce","Type":"ContainerDied","Data":"1734f3a893f52271e4f25d0a7877fbc0673eead09c408d9f41ad900cbd652ac3"} Jan 30 07:20:27 crc kubenswrapper[4520]: I0130 07:20:27.794124 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:20:27 crc kubenswrapper[4520]: I0130 07:20:27.794357 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.692363 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.870501 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-combined-ca-bundle\") pod \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.870632 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-secret-0\") pod \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.870699 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-ssh-key-openstack-edpm-ipam\") pod \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.870720 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-inventory\") pod \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.870860 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn5lx\" (UniqueName: \"kubernetes.io/projected/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-kube-api-access-xn5lx\") pod \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\" (UID: \"9102a617-91a2-4170-a0f7-3c34f1d8d0ce\") " Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.879844 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9102a617-91a2-4170-a0f7-3c34f1d8d0ce" (UID: "9102a617-91a2-4170-a0f7-3c34f1d8d0ce"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.885664 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-kube-api-access-xn5lx" (OuterVolumeSpecName: "kube-api-access-xn5lx") pod "9102a617-91a2-4170-a0f7-3c34f1d8d0ce" (UID: "9102a617-91a2-4170-a0f7-3c34f1d8d0ce"). InnerVolumeSpecName "kube-api-access-xn5lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.891249 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9102a617-91a2-4170-a0f7-3c34f1d8d0ce" (UID: "9102a617-91a2-4170-a0f7-3c34f1d8d0ce"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.894129 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-inventory" (OuterVolumeSpecName: "inventory") pod "9102a617-91a2-4170-a0f7-3c34f1d8d0ce" (UID: "9102a617-91a2-4170-a0f7-3c34f1d8d0ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.896848 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9102a617-91a2-4170-a0f7-3c34f1d8d0ce" (UID: "9102a617-91a2-4170-a0f7-3c34f1d8d0ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.972396 4520 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.972426 4520 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.972435 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.972445 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:20:28 crc kubenswrapper[4520]: I0130 07:20:28.972454 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn5lx\" (UniqueName: \"kubernetes.io/projected/9102a617-91a2-4170-a0f7-3c34f1d8d0ce-kube-api-access-xn5lx\") on node \"crc\" DevicePath \"\"" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.348119 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" event={"ID":"9102a617-91a2-4170-a0f7-3c34f1d8d0ce","Type":"ContainerDied","Data":"ecedb49bbe99e2038c90be012a21d04e854e0acc9810315174e7c30047a41f4a"} Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.348381 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecedb49bbe99e2038c90be012a21d04e854e0acc9810315174e7c30047a41f4a" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.348190 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-55q7r" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.424630 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g"] Jan 30 07:20:29 crc kubenswrapper[4520]: E0130 07:20:29.425017 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9102a617-91a2-4170-a0f7-3c34f1d8d0ce" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.425038 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9102a617-91a2-4170-a0f7-3c34f1d8d0ce" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.425221 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9102a617-91a2-4170-a0f7-3c34f1d8d0ce" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.425829 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.429455 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.429907 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.430193 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.430396 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.430603 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.433865 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.433893 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.435047 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g"] Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.479957 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1f48882-fba1-44f0-a438-6d24f531e431-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.480050 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.480093 4520 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.480124 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.480148 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.480252 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.480297 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.480323 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.480355 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjdkt\" (UniqueName: \"kubernetes.io/projected/e1f48882-fba1-44f0-a438-6d24f531e431-kube-api-access-jjdkt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581381 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581429 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581454 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581494 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581537 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581560 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581579 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjdkt\" (UniqueName: \"kubernetes.io/projected/e1f48882-fba1-44f0-a438-6d24f531e431-kube-api-access-jjdkt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581603 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1f48882-fba1-44f0-a438-6d24f531e431-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.581651 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.583156 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1f48882-fba1-44f0-a438-6d24f531e431-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.586275 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.587939 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.587982 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.588210 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.588394 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.588657 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.590293 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: 
I0130 07:20:29.597012 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjdkt\" (UniqueName: \"kubernetes.io/projected/e1f48882-fba1-44f0-a438-6d24f531e431-kube-api-access-jjdkt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hr42g\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:29 crc kubenswrapper[4520]: I0130 07:20:29.760909 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:20:30 crc kubenswrapper[4520]: I0130 07:20:30.213194 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g"] Jan 30 07:20:30 crc kubenswrapper[4520]: I0130 07:20:30.221039 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 07:20:30 crc kubenswrapper[4520]: I0130 07:20:30.355348 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" event={"ID":"e1f48882-fba1-44f0-a438-6d24f531e431","Type":"ContainerStarted","Data":"76f0bd13030cb51aaf18afca19d5227d93cba5e7f65bc0dfb39475bda1192e0a"} Jan 30 07:20:31 crc kubenswrapper[4520]: I0130 07:20:31.383956 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" event={"ID":"e1f48882-fba1-44f0-a438-6d24f531e431","Type":"ContainerStarted","Data":"119bde3f628b1c7999eeb6a6ab8f099f3c8d1fc1c3bbd5aa74f56f18a2336bfb"} Jan 30 07:20:31 crc kubenswrapper[4520]: I0130 07:20:31.406587 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" podStartSLOduration=1.82226897 podStartE2EDuration="2.406562661s" podCreationTimestamp="2026-01-30 07:20:29 +0000 UTC" firstStartedPulling="2026-01-30 07:20:30.220827089 +0000 UTC m=+2143.849179270" lastFinishedPulling="2026-01-30 07:20:30.80512078 +0000 UTC m=+2144.433472961" observedRunningTime="2026-01-30 07:20:31.396978922 +0000 UTC m=+2145.025331103" watchObservedRunningTime="2026-01-30 07:20:31.406562661 +0000 UTC m=+2145.034914843" Jan 30 07:20:57 crc kubenswrapper[4520]: I0130 07:20:57.793856 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:20:57 crc kubenswrapper[4520]: I0130 07:20:57.794329 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:21:27 crc kubenswrapper[4520]: I0130 07:21:27.793909 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:21:27 crc kubenswrapper[4520]: I0130 07:21:27.794558 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:21:27 crc kubenswrapper[4520]: I0130 07:21:27.794601 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:21:27 crc kubenswrapper[4520]: I0130 07:21:27.795246 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f9d3d41b0a37515cd60005bd2f7590ed422a66a445f98c088e023f788133e52"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:21:27 crc kubenswrapper[4520]: I0130 07:21:27.795295 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://6f9d3d41b0a37515cd60005bd2f7590ed422a66a445f98c088e023f788133e52" gracePeriod=600 Jan 30 07:21:28 crc kubenswrapper[4520]: I0130 07:21:28.760613 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="6f9d3d41b0a37515cd60005bd2f7590ed422a66a445f98c088e023f788133e52" exitCode=0 Jan 30 07:21:28 crc kubenswrapper[4520]: I0130 07:21:28.760653 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"6f9d3d41b0a37515cd60005bd2f7590ed422a66a445f98c088e023f788133e52"} Jan 30 07:21:28 crc kubenswrapper[4520]: I0130 07:21:28.760958 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9"} Jan 30 07:21:28 crc kubenswrapper[4520]: I0130 07:21:28.760982 4520 scope.go:117] "RemoveContainer" containerID="3511c403ecc0670dedcbeb455988f781d984e79ff36ca09f0a0274a95f203ca7" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.276930 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zwgfv"] Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.279680 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.281144 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fbzx\" (UniqueName: \"kubernetes.io/projected/d36ab1eb-5ad2-44a4-aecb-84095d597995-kube-api-access-9fbzx\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.281546 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-catalog-content\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.281574 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-utilities\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.290015 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwgfv"] Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.382952 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-catalog-content\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.383297 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-utilities\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.383416 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbzx\" (UniqueName: \"kubernetes.io/projected/d36ab1eb-5ad2-44a4-aecb-84095d597995-kube-api-access-9fbzx\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.383360 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-catalog-content\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.383797 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-utilities\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.399998 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9fbzx\" (UniqueName: \"kubernetes.io/projected/d36ab1eb-5ad2-44a4-aecb-84095d597995-kube-api-access-9fbzx\") pod \"community-operators-zwgfv\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:42 crc kubenswrapper[4520]: I0130 07:21:42.596902 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:43 crc kubenswrapper[4520]: I0130 07:21:43.232280 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwgfv"] Jan 30 07:21:43 crc kubenswrapper[4520]: I0130 07:21:43.871663 4520 generic.go:334] "Generic (PLEG): container finished" podID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerID="3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248" exitCode=0 Jan 30 07:21:43 crc kubenswrapper[4520]: I0130 07:21:43.871717 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwgfv" event={"ID":"d36ab1eb-5ad2-44a4-aecb-84095d597995","Type":"ContainerDied","Data":"3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248"} Jan 30 07:21:43 crc kubenswrapper[4520]: I0130 07:21:43.871980 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwgfv" event={"ID":"d36ab1eb-5ad2-44a4-aecb-84095d597995","Type":"ContainerStarted","Data":"c6a44c551d91ff55bbd2c9c06d72e7695a95e806367cda51ba56533d737dfdaa"} Jan 30 07:21:44 crc kubenswrapper[4520]: I0130 07:21:44.889247 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwgfv" event={"ID":"d36ab1eb-5ad2-44a4-aecb-84095d597995","Type":"ContainerStarted","Data":"b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8"} Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.469224 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlfg"] Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.470998 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.487075 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlfg"] Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.571562 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-utilities\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.571821 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-catalog-content\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.572043 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtds8\" (UniqueName: \"kubernetes.io/projected/1807083e-f363-4587-932a-137eb9feaaec-kube-api-access-mtds8\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.674222 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-catalog-content\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.674348 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtds8\" (UniqueName: \"kubernetes.io/projected/1807083e-f363-4587-932a-137eb9feaaec-kube-api-access-mtds8\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.674382 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-utilities\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.674846 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-catalog-content\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.674898 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-utilities\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.695864 4520 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mtds8\" (UniqueName: \"kubernetes.io/projected/1807083e-f363-4587-932a-137eb9feaaec-kube-api-access-mtds8\") pod \"redhat-marketplace-bxlfg\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.793195 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.905932 4520 generic.go:334] "Generic (PLEG): container finished" podID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerID="b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8" exitCode=0 Jan 30 07:21:45 crc kubenswrapper[4520]: I0130 07:21:45.906150 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwgfv" event={"ID":"d36ab1eb-5ad2-44a4-aecb-84095d597995","Type":"ContainerDied","Data":"b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8"} Jan 30 07:21:46 crc kubenswrapper[4520]: I0130 07:21:46.248983 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlfg"] Jan 30 07:21:46 crc kubenswrapper[4520]: I0130 07:21:46.923422 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwgfv" event={"ID":"d36ab1eb-5ad2-44a4-aecb-84095d597995","Type":"ContainerStarted","Data":"77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef"} Jan 30 07:21:46 crc kubenswrapper[4520]: I0130 07:21:46.924743 4520 generic.go:334] "Generic (PLEG): container finished" podID="1807083e-f363-4587-932a-137eb9feaaec" containerID="7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72" exitCode=0 Jan 30 07:21:46 crc kubenswrapper[4520]: I0130 07:21:46.924779 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlfg" event={"ID":"1807083e-f363-4587-932a-137eb9feaaec","Type":"ContainerDied","Data":"7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72"} Jan 30 07:21:46 crc kubenswrapper[4520]: I0130 07:21:46.924813 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlfg" event={"ID":"1807083e-f363-4587-932a-137eb9feaaec","Type":"ContainerStarted","Data":"7123303de61d4a0ed8aca6fec9488d5225cc7910e8cf4d33fb5892434be15b87"} Jan 30 07:21:46 crc kubenswrapper[4520]: I0130 07:21:46.948473 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zwgfv" podStartSLOduration=2.424125864 podStartE2EDuration="4.948455797s" podCreationTimestamp="2026-01-30 07:21:42 +0000 UTC" firstStartedPulling="2026-01-30 07:21:43.875036046 +0000 UTC m=+2217.503388227" lastFinishedPulling="2026-01-30 07:21:46.399365978 +0000 UTC m=+2220.027718160" observedRunningTime="2026-01-30 07:21:46.942487646 +0000 UTC m=+2220.570839828" watchObservedRunningTime="2026-01-30 07:21:46.948455797 +0000 UTC m=+2220.576807978" Jan 30 07:21:47 crc kubenswrapper[4520]: I0130 07:21:47.941408 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlfg" event={"ID":"1807083e-f363-4587-932a-137eb9feaaec","Type":"ContainerStarted","Data":"b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83"} Jan 30 07:21:48 crc kubenswrapper[4520]: I0130 07:21:48.949672 4520 generic.go:334] "Generic (PLEG): container finished" 
podID="1807083e-f363-4587-932a-137eb9feaaec" containerID="b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83" exitCode=0 Jan 30 07:21:48 crc kubenswrapper[4520]: I0130 07:21:48.950111 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlfg" event={"ID":"1807083e-f363-4587-932a-137eb9feaaec","Type":"ContainerDied","Data":"b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83"} Jan 30 07:21:49 crc kubenswrapper[4520]: I0130 07:21:49.962672 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlfg" event={"ID":"1807083e-f363-4587-932a-137eb9feaaec","Type":"ContainerStarted","Data":"c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e"} Jan 30 07:21:49 crc kubenswrapper[4520]: I0130 07:21:49.985724 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bxlfg" podStartSLOduration=2.489139471 podStartE2EDuration="4.985706143s" podCreationTimestamp="2026-01-30 07:21:45 +0000 UTC" firstStartedPulling="2026-01-30 07:21:46.926087459 +0000 UTC m=+2220.554439641" lastFinishedPulling="2026-01-30 07:21:49.422654132 +0000 UTC m=+2223.051006313" observedRunningTime="2026-01-30 07:21:49.981482481 +0000 UTC m=+2223.609834662" watchObservedRunningTime="2026-01-30 07:21:49.985706143 +0000 UTC m=+2223.614058324" Jan 30 07:21:52 crc kubenswrapper[4520]: I0130 07:21:52.597090 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:52 crc kubenswrapper[4520]: I0130 07:21:52.597321 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:52 crc kubenswrapper[4520]: I0130 07:21:52.647356 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:53 crc kubenswrapper[4520]: I0130 07:21:53.023471 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:54 crc kubenswrapper[4520]: I0130 07:21:54.070677 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwgfv"] Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.001064 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zwgfv" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerName="registry-server" containerID="cri-o://77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef" gracePeriod=2 Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.400440 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.585905 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-catalog-content\") pod \"d36ab1eb-5ad2-44a4-aecb-84095d597995\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.585965 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-utilities\") pod \"d36ab1eb-5ad2-44a4-aecb-84095d597995\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.586111 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fbzx\" (UniqueName: \"kubernetes.io/projected/d36ab1eb-5ad2-44a4-aecb-84095d597995-kube-api-access-9fbzx\") pod \"d36ab1eb-5ad2-44a4-aecb-84095d597995\" (UID: \"d36ab1eb-5ad2-44a4-aecb-84095d597995\") " Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.586647 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-utilities" (OuterVolumeSpecName: "utilities") pod "d36ab1eb-5ad2-44a4-aecb-84095d597995" (UID: "d36ab1eb-5ad2-44a4-aecb-84095d597995"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.592497 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36ab1eb-5ad2-44a4-aecb-84095d597995-kube-api-access-9fbzx" (OuterVolumeSpecName: "kube-api-access-9fbzx") pod "d36ab1eb-5ad2-44a4-aecb-84095d597995" (UID: "d36ab1eb-5ad2-44a4-aecb-84095d597995"). InnerVolumeSpecName "kube-api-access-9fbzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.630297 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d36ab1eb-5ad2-44a4-aecb-84095d597995" (UID: "d36ab1eb-5ad2-44a4-aecb-84095d597995"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.687924 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.687955 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d36ab1eb-5ad2-44a4-aecb-84095d597995-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.687968 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fbzx\" (UniqueName: \"kubernetes.io/projected/d36ab1eb-5ad2-44a4-aecb-84095d597995-kube-api-access-9fbzx\") on node \"crc\" DevicePath \"\"" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.794536 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.794597 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:55 crc kubenswrapper[4520]: I0130 07:21:55.833086 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.013270 4520 generic.go:334] "Generic (PLEG): container finished" podID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerID="77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef" exitCode=0 Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.013370 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwgfv" event={"ID":"d36ab1eb-5ad2-44a4-aecb-84095d597995","Type":"ContainerDied","Data":"77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef"} Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.013408 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwgfv" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.013438 4520 scope.go:117] "RemoveContainer" containerID="77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.013423 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwgfv" event={"ID":"d36ab1eb-5ad2-44a4-aecb-84095d597995","Type":"ContainerDied","Data":"c6a44c551d91ff55bbd2c9c06d72e7695a95e806367cda51ba56533d737dfdaa"} Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.031452 4520 scope.go:117] "RemoveContainer" containerID="b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.043468 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwgfv"] Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.055945 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zwgfv"] Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.062060 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.063865 4520 scope.go:117] "RemoveContainer" containerID="3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.100602 4520 scope.go:117] "RemoveContainer" containerID="77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef" Jan 30 07:21:56 crc kubenswrapper[4520]: E0130 07:21:56.101097 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef\": container with ID starting with 77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef not found: ID does not exist" containerID="77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.101134 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef"} err="failed to get container status \"77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef\": rpc error: code = NotFound desc = could not find container \"77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef\": container with ID starting with 77c6b3bdd5776ea9ead0aad82af2d657508803c3990a8fd809f9452641a059ef not found: ID does not exist" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.101157 4520 scope.go:117] "RemoveContainer" containerID="b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8" Jan 30 07:21:56 crc kubenswrapper[4520]: E0130 07:21:56.101508 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8\": container with ID starting with b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8 not found: ID does not exist" containerID="b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.101570 4520 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8"} err="failed to get container status \"b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8\": rpc error: code = NotFound desc = could not find container \"b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8\": container with ID starting with b4e6378fa204f5267484bf13d11d587304f8bcfbdc1465d2b79fa4e017e415c8 not found: ID does not exist" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.101587 4520 scope.go:117] "RemoveContainer" containerID="3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248" Jan 30 07:21:56 crc kubenswrapper[4520]: E0130 07:21:56.101947 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248\": container with ID starting with 3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248 not found: ID does not exist" containerID="3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.101971 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248"} err="failed to get container status \"3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248\": rpc error: code = NotFound desc = could not find container \"3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248\": container with ID starting with 3fc4f3c631bf8fb7b1bcee804baf916123747b5238e848d166dc3494b2c18248 not found: ID does not exist" Jan 30 07:21:56 crc kubenswrapper[4520]: I0130 07:21:56.706978 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" path="/var/lib/kubelet/pods/d36ab1eb-5ad2-44a4-aecb-84095d597995/volumes" Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.066684 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlfg"] Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.067929 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bxlfg" podUID="1807083e-f363-4587-932a-137eb9feaaec" containerName="registry-server" containerID="cri-o://c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e" gracePeriod=2 Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.458009 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.547195 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtds8\" (UniqueName: \"kubernetes.io/projected/1807083e-f363-4587-932a-137eb9feaaec-kube-api-access-mtds8\") pod \"1807083e-f363-4587-932a-137eb9feaaec\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.547262 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-catalog-content\") pod \"1807083e-f363-4587-932a-137eb9feaaec\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.547341 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-utilities\") pod \"1807083e-f363-4587-932a-137eb9feaaec\" (UID: \"1807083e-f363-4587-932a-137eb9feaaec\") " Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.548097 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-utilities" (OuterVolumeSpecName: "utilities") pod "1807083e-f363-4587-932a-137eb9feaaec" (UID: "1807083e-f363-4587-932a-137eb9feaaec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.552356 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1807083e-f363-4587-932a-137eb9feaaec-kube-api-access-mtds8" (OuterVolumeSpecName: "kube-api-access-mtds8") pod "1807083e-f363-4587-932a-137eb9feaaec" (UID: "1807083e-f363-4587-932a-137eb9feaaec"). InnerVolumeSpecName "kube-api-access-mtds8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.567425 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1807083e-f363-4587-932a-137eb9feaaec" (UID: "1807083e-f363-4587-932a-137eb9feaaec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.649039 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtds8\" (UniqueName: \"kubernetes.io/projected/1807083e-f363-4587-932a-137eb9feaaec-kube-api-access-mtds8\") on node \"crc\" DevicePath \"\"" Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.649073 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:21:58 crc kubenswrapper[4520]: I0130 07:21:58.649086 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1807083e-f363-4587-932a-137eb9feaaec-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.040276 4520 generic.go:334] "Generic (PLEG): container finished" podID="1807083e-f363-4587-932a-137eb9feaaec" containerID="c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e" exitCode=0 Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.040338 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlfg" event={"ID":"1807083e-f363-4587-932a-137eb9feaaec","Type":"ContainerDied","Data":"c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e"} Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.040375 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlfg" event={"ID":"1807083e-f363-4587-932a-137eb9feaaec","Type":"ContainerDied","Data":"7123303de61d4a0ed8aca6fec9488d5225cc7910e8cf4d33fb5892434be15b87"} Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.040394 4520 scope.go:117] "RemoveContainer" containerID="c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.040566 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxlfg" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.059088 4520 scope.go:117] "RemoveContainer" containerID="b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.063145 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlfg"] Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.075344 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlfg"] Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.083279 4520 scope.go:117] "RemoveContainer" containerID="7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.117426 4520 scope.go:117] "RemoveContainer" containerID="c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e" Jan 30 07:21:59 crc kubenswrapper[4520]: E0130 07:21:59.118009 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e\": container with ID starting with c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e not found: ID does not exist" containerID="c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.118081 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e"} err="failed to get container status \"c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e\": rpc error: code = NotFound desc = could not find container \"c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e\": container with ID starting with c28bccf45185c463ce3cc8784be0d69a41a775449e0aa1b86773a293d927b52e not found: ID does not exist" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.118112 4520 scope.go:117] "RemoveContainer" containerID="b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83" Jan 30 07:21:59 crc kubenswrapper[4520]: E0130 07:21:59.118510 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83\": container with ID starting with b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83 not found: ID does not exist" containerID="b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.118570 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83"} err="failed to get container status \"b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83\": rpc error: code = NotFound desc = could not find container \"b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83\": container with ID starting with b63da99220dd957a87e825469eef5a519edea36dce0af143c053dfb4a3d37d83 not found: ID does not exist" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.118596 4520 scope.go:117] "RemoveContainer" containerID="7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72" Jan 30 07:21:59 crc kubenswrapper[4520]: E0130 07:21:59.118867 4520 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72\": container with ID starting with 7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72 not found: ID does not exist" containerID="7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72" Jan 30 07:21:59 crc kubenswrapper[4520]: I0130 07:21:59.118887 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72"} err="failed to get container status \"7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72\": rpc error: code = NotFound desc = could not find container \"7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72\": container with ID starting with 7825e5640c25acc035b6f0e466f7cbc92a843e486711dcd3eb78e01ecfc18f72 not found: ID does not exist" Jan 30 07:22:00 crc kubenswrapper[4520]: I0130 07:22:00.700704 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1807083e-f363-4587-932a-137eb9feaaec" path="/var/lib/kubelet/pods/1807083e-f363-4587-932a-137eb9feaaec/volumes" Jan 30 07:22:17 crc kubenswrapper[4520]: I0130 07:22:17.179230 4520 generic.go:334] "Generic (PLEG): container finished" podID="e1f48882-fba1-44f0-a438-6d24f531e431" containerID="119bde3f628b1c7999eeb6a6ab8f099f3c8d1fc1c3bbd5aa74f56f18a2336bfb" exitCode=0 Jan 30 07:22:17 crc kubenswrapper[4520]: I0130 07:22:17.179381 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" event={"ID":"e1f48882-fba1-44f0-a438-6d24f531e431","Type":"ContainerDied","Data":"119bde3f628b1c7999eeb6a6ab8f099f3c8d1fc1c3bbd5aa74f56f18a2336bfb"} Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.623493 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.761047 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-1\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.761099 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-0\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.761122 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1f48882-fba1-44f0-a438-6d24f531e431-nova-extra-config-0\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.761138 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-1\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.761175 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-inventory\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.761820 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-combined-ca-bundle\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.761889 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-0\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.761942 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjdkt\" (UniqueName: \"kubernetes.io/projected/e1f48882-fba1-44f0-a438-6d24f531e431-kube-api-access-jjdkt\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.762039 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-ssh-key-openstack-edpm-ipam\") pod \"e1f48882-fba1-44f0-a438-6d24f531e431\" (UID: \"e1f48882-fba1-44f0-a438-6d24f531e431\") " Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.785022 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.793259 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f48882-fba1-44f0-a438-6d24f531e431-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.794662 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f48882-fba1-44f0-a438-6d24f531e431-kube-api-access-jjdkt" (OuterVolumeSpecName: "kube-api-access-jjdkt") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "kube-api-access-jjdkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.796329 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.798067 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-inventory" (OuterVolumeSpecName: "inventory") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.798391 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.804928 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.810446 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.812270 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e1f48882-fba1-44f0-a438-6d24f531e431" (UID: "e1f48882-fba1-44f0-a438-6d24f531e431"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865466 4520 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865501 4520 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865544 4520 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e1f48882-fba1-44f0-a438-6d24f531e431-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865557 4520 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865571 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865582 4520 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865591 4520 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865601 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjdkt\" (UniqueName: \"kubernetes.io/projected/e1f48882-fba1-44f0-a438-6d24f531e431-kube-api-access-jjdkt\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:18 crc kubenswrapper[4520]: I0130 07:22:18.865611 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1f48882-fba1-44f0-a438-6d24f531e431-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.200336 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" event={"ID":"e1f48882-fba1-44f0-a438-6d24f531e431","Type":"ContainerDied","Data":"76f0bd13030cb51aaf18afca19d5227d93cba5e7f65bc0dfb39475bda1192e0a"} Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.200381 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hr42g" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.200385 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76f0bd13030cb51aaf18afca19d5227d93cba5e7f65bc0dfb39475bda1192e0a" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.291151 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v"] Jan 30 07:22:19 crc kubenswrapper[4520]: E0130 07:22:19.291811 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerName="extract-utilities" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.291830 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerName="extract-utilities" Jan 30 07:22:19 crc kubenswrapper[4520]: E0130 07:22:19.291844 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerName="registry-server" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.291849 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerName="registry-server" Jan 30 07:22:19 crc kubenswrapper[4520]: E0130 07:22:19.291871 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1807083e-f363-4587-932a-137eb9feaaec" containerName="extract-utilities" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.291877 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1807083e-f363-4587-932a-137eb9feaaec" containerName="extract-utilities" Jan 30 07:22:19 crc kubenswrapper[4520]: E0130 07:22:19.291892 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f48882-fba1-44f0-a438-6d24f531e431" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.291898 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f48882-fba1-44f0-a438-6d24f531e431" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 07:22:19 crc kubenswrapper[4520]: E0130 07:22:19.291908 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerName="extract-content" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.291914 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerName="extract-content" Jan 30 07:22:19 crc kubenswrapper[4520]: E0130 07:22:19.291927 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1807083e-f363-4587-932a-137eb9feaaec" containerName="registry-server" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.291945 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1807083e-f363-4587-932a-137eb9feaaec" containerName="registry-server" Jan 30 07:22:19 crc kubenswrapper[4520]: E0130 07:22:19.291967 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1807083e-f363-4587-932a-137eb9feaaec" containerName="extract-content" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.291972 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1807083e-f363-4587-932a-137eb9feaaec" containerName="extract-content" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.292197 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f48882-fba1-44f0-a438-6d24f531e431" containerName="nova-edpm-deployment-openstack-edpm-ipam" 
Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.292221 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36ab1eb-5ad2-44a4-aecb-84095d597995" containerName="registry-server" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.292232 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1807083e-f363-4587-932a-137eb9feaaec" containerName="registry-server" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.293054 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.296680 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.296848 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.296980 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r7s58" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.298111 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.299858 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v"] Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.300460 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.379123 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.379191 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.379247 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.379273 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") 
" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.379331 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwtsg\" (UniqueName: \"kubernetes.io/projected/b059ed79-ce87-4d24-9774-056d1f97d64a-kube-api-access-hwtsg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.379378 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.379406 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.481299 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.481361 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.481419 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.481952 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.482021 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.482393 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.482469 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwtsg\" (UniqueName: \"kubernetes.io/projected/b059ed79-ce87-4d24-9774-056d1f97d64a-kube-api-access-hwtsg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.485929 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.485932 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.486313 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.486566 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.486798 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.487230 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.497114 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwtsg\" (UniqueName: \"kubernetes.io/projected/b059ed79-ce87-4d24-9774-056d1f97d64a-kube-api-access-hwtsg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:19 crc kubenswrapper[4520]: I0130 07:22:19.606657 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:22:20 crc kubenswrapper[4520]: I0130 07:22:20.245696 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v"] Jan 30 07:22:21 crc kubenswrapper[4520]: I0130 07:22:21.226207 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" event={"ID":"b059ed79-ce87-4d24-9774-056d1f97d64a","Type":"ContainerStarted","Data":"e5c112d60b17f5a296526316afef13079c5da348d13b28b18f0f5347bdf85f6c"} Jan 30 07:22:21 crc kubenswrapper[4520]: I0130 07:22:21.226731 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" event={"ID":"b059ed79-ce87-4d24-9774-056d1f97d64a","Type":"ContainerStarted","Data":"83e05aacf2f98b6264bf4c4ada6bf78ce0ccd39c2a72b64aa7fd85d566198a63"} Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.440264 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" podStartSLOduration=59.826458254 podStartE2EDuration="1m0.440248413s" podCreationTimestamp="2026-01-30 07:22:19 +0000 UTC" firstStartedPulling="2026-01-30 07:22:20.251753878 +0000 UTC m=+2253.880106059" lastFinishedPulling="2026-01-30 07:22:20.865544038 +0000 UTC m=+2254.493896218" observedRunningTime="2026-01-30 07:22:21.240659898 +0000 UTC m=+2254.869012078" watchObservedRunningTime="2026-01-30 07:23:19.440248413 +0000 UTC m=+2313.068600594" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.444385 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vkjg9"] Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.447307 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.471257 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkjg9"] Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.579729 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5m8\" (UniqueName: \"kubernetes.io/projected/ee26e987-a102-48cf-b541-1188c31decce-kube-api-access-cs5m8\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.579824 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-utilities\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.579859 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-catalog-content\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.682303 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-catalog-content\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.682478 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs5m8\" (UniqueName: \"kubernetes.io/projected/ee26e987-a102-48cf-b541-1188c31decce-kube-api-access-cs5m8\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.682503 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-utilities\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.683179 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-catalog-content\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.683238 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-utilities\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.706359 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cs5m8\" (UniqueName: \"kubernetes.io/projected/ee26e987-a102-48cf-b541-1188c31decce-kube-api-access-cs5m8\") pod \"redhat-operators-vkjg9\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:19 crc kubenswrapper[4520]: I0130 07:23:19.770341 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:20 crc kubenswrapper[4520]: I0130 07:23:20.189702 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkjg9"] Jan 30 07:23:20 crc kubenswrapper[4520]: I0130 07:23:20.613087 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee26e987-a102-48cf-b541-1188c31decce" containerID="478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218" exitCode=0 Jan 30 07:23:20 crc kubenswrapper[4520]: I0130 07:23:20.613126 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkjg9" event={"ID":"ee26e987-a102-48cf-b541-1188c31decce","Type":"ContainerDied","Data":"478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218"} Jan 30 07:23:20 crc kubenswrapper[4520]: I0130 07:23:20.613157 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkjg9" event={"ID":"ee26e987-a102-48cf-b541-1188c31decce","Type":"ContainerStarted","Data":"258bd008a644e75bb7bf9c3ed0f7612c419190d983f45bdcae87309b1e9e4074"} Jan 30 07:23:21 crc kubenswrapper[4520]: I0130 07:23:21.631407 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkjg9" event={"ID":"ee26e987-a102-48cf-b541-1188c31decce","Type":"ContainerStarted","Data":"66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c"} Jan 30 07:23:24 crc kubenswrapper[4520]: I0130 07:23:24.658104 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee26e987-a102-48cf-b541-1188c31decce" containerID="66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c" exitCode=0 Jan 30 07:23:24 crc kubenswrapper[4520]: I0130 07:23:24.658320 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkjg9" event={"ID":"ee26e987-a102-48cf-b541-1188c31decce","Type":"ContainerDied","Data":"66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c"} Jan 30 07:23:25 crc kubenswrapper[4520]: I0130 07:23:25.671560 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkjg9" event={"ID":"ee26e987-a102-48cf-b541-1188c31decce","Type":"ContainerStarted","Data":"c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67"} Jan 30 07:23:25 crc kubenswrapper[4520]: I0130 07:23:25.693470 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vkjg9" podStartSLOduration=2.091303188 podStartE2EDuration="6.693452506s" podCreationTimestamp="2026-01-30 07:23:19 +0000 UTC" firstStartedPulling="2026-01-30 07:23:20.614718754 +0000 UTC m=+2314.243070936" lastFinishedPulling="2026-01-30 07:23:25.216868083 +0000 UTC m=+2318.845220254" observedRunningTime="2026-01-30 07:23:25.689926186 +0000 UTC m=+2319.318278366" watchObservedRunningTime="2026-01-30 07:23:25.693452506 +0000 UTC m=+2319.321804687" Jan 30 07:23:29 crc kubenswrapper[4520]: I0130 07:23:29.770683 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vkjg9" 
Jan 30 07:23:29 crc kubenswrapper[4520]: I0130 07:23:29.771090 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:30 crc kubenswrapper[4520]: I0130 07:23:30.803892 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vkjg9" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="registry-server" probeResult="failure" output=< Jan 30 07:23:30 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:23:30 crc kubenswrapper[4520]: > Jan 30 07:23:39 crc kubenswrapper[4520]: I0130 07:23:39.802296 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:39 crc kubenswrapper[4520]: I0130 07:23:39.856084 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:40 crc kubenswrapper[4520]: I0130 07:23:40.031266 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkjg9"] Jan 30 07:23:41 crc kubenswrapper[4520]: I0130 07:23:41.770913 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vkjg9" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="registry-server" containerID="cri-o://c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67" gracePeriod=2 Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.168825 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.270571 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-catalog-content\") pod \"ee26e987-a102-48cf-b541-1188c31decce\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.270654 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs5m8\" (UniqueName: \"kubernetes.io/projected/ee26e987-a102-48cf-b541-1188c31decce-kube-api-access-cs5m8\") pod \"ee26e987-a102-48cf-b541-1188c31decce\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.270795 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-utilities\") pod \"ee26e987-a102-48cf-b541-1188c31decce\" (UID: \"ee26e987-a102-48cf-b541-1188c31decce\") " Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.271366 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-utilities" (OuterVolumeSpecName: "utilities") pod "ee26e987-a102-48cf-b541-1188c31decce" (UID: "ee26e987-a102-48cf-b541-1188c31decce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.278694 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee26e987-a102-48cf-b541-1188c31decce-kube-api-access-cs5m8" (OuterVolumeSpecName: "kube-api-access-cs5m8") pod "ee26e987-a102-48cf-b541-1188c31decce" (UID: "ee26e987-a102-48cf-b541-1188c31decce"). InnerVolumeSpecName "kube-api-access-cs5m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.364700 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee26e987-a102-48cf-b541-1188c31decce" (UID: "ee26e987-a102-48cf-b541-1188c31decce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.374382 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs5m8\" (UniqueName: \"kubernetes.io/projected/ee26e987-a102-48cf-b541-1188c31decce-kube-api-access-cs5m8\") on node \"crc\" DevicePath \"\"" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.374414 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.374424 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee26e987-a102-48cf-b541-1188c31decce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.780735 4520 generic.go:334] "Generic (PLEG): container finished" podID="ee26e987-a102-48cf-b541-1188c31decce" containerID="c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67" exitCode=0 Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.780799 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkjg9" event={"ID":"ee26e987-a102-48cf-b541-1188c31decce","Type":"ContainerDied","Data":"c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67"} Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.780837 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkjg9" event={"ID":"ee26e987-a102-48cf-b541-1188c31decce","Type":"ContainerDied","Data":"258bd008a644e75bb7bf9c3ed0f7612c419190d983f45bdcae87309b1e9e4074"} Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.780860 4520 scope.go:117] "RemoveContainer" containerID="c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.781043 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkjg9" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.800578 4520 scope.go:117] "RemoveContainer" containerID="66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.801220 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkjg9"] Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.810489 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vkjg9"] Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.817858 4520 scope.go:117] "RemoveContainer" containerID="478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.854202 4520 scope.go:117] "RemoveContainer" containerID="c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67" Jan 30 07:23:42 crc kubenswrapper[4520]: E0130 07:23:42.854629 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67\": container with ID starting with c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67 not found: ID does not exist" containerID="c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.854662 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67"} err="failed to get container status \"c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67\": rpc error: code = NotFound desc = could not find container \"c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67\": container with ID starting with c9bac8d96eb7e6d44184cc5d37d6ade330446dea1934903bee79bcaac31e8d67 not found: ID does not exist" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.854687 4520 scope.go:117] "RemoveContainer" containerID="66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c" Jan 30 07:23:42 crc kubenswrapper[4520]: E0130 07:23:42.854969 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c\": container with ID starting with 66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c not found: ID does not exist" containerID="66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.855051 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c"} err="failed to get container status \"66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c\": rpc error: code = NotFound desc = could not find container \"66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c\": container with ID starting with 66fb8f12b6a24f4f732608a6cf9b956cead6d86f1be6ac6f4926a1848bee160c not found: ID does not exist" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.855128 4520 scope.go:117] "RemoveContainer" containerID="478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218" Jan 30 07:23:42 crc kubenswrapper[4520]: E0130 07:23:42.855556 4520 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218\": container with ID starting with 478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218 not found: ID does not exist" containerID="478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218" Jan 30 07:23:42 crc kubenswrapper[4520]: I0130 07:23:42.855588 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218"} err="failed to get container status \"478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218\": rpc error: code = NotFound desc = could not find container \"478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218\": container with ID starting with 478131873eaa7d815c0bf79694cb12b45bef8f104669e6568e2a33fc69f9b218 not found: ID does not exist" Jan 30 07:23:44 crc kubenswrapper[4520]: I0130 07:23:44.705380 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee26e987-a102-48cf-b541-1188c31decce" path="/var/lib/kubelet/pods/ee26e987-a102-48cf-b541-1188c31decce/volumes" Jan 30 07:23:57 crc kubenswrapper[4520]: I0130 07:23:57.793210 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:23:57 crc kubenswrapper[4520]: I0130 07:23:57.793660 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:24:23 crc kubenswrapper[4520]: I0130 07:24:23.049201 4520 generic.go:334] "Generic (PLEG): container finished" podID="b059ed79-ce87-4d24-9774-056d1f97d64a" containerID="e5c112d60b17f5a296526316afef13079c5da348d13b28b18f0f5347bdf85f6c" exitCode=0 Jan 30 07:24:23 crc kubenswrapper[4520]: I0130 07:24:23.049296 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" event={"ID":"b059ed79-ce87-4d24-9774-056d1f97d64a","Type":"ContainerDied","Data":"e5c112d60b17f5a296526316afef13079c5da348d13b28b18f0f5347bdf85f6c"} Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.364760 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.389870 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-telemetry-combined-ca-bundle\") pod \"b059ed79-ce87-4d24-9774-056d1f97d64a\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.389909 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ssh-key-openstack-edpm-ipam\") pod \"b059ed79-ce87-4d24-9774-056d1f97d64a\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.389986 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-1\") pod \"b059ed79-ce87-4d24-9774-056d1f97d64a\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.390058 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-0\") pod \"b059ed79-ce87-4d24-9774-056d1f97d64a\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.390081 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-inventory\") pod \"b059ed79-ce87-4d24-9774-056d1f97d64a\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.401754 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b059ed79-ce87-4d24-9774-056d1f97d64a" (UID: "b059ed79-ce87-4d24-9774-056d1f97d64a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.412648 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b059ed79-ce87-4d24-9774-056d1f97d64a" (UID: "b059ed79-ce87-4d24-9774-056d1f97d64a"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.425260 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b059ed79-ce87-4d24-9774-056d1f97d64a" (UID: "b059ed79-ce87-4d24-9774-056d1f97d64a"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.425654 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b059ed79-ce87-4d24-9774-056d1f97d64a" (UID: "b059ed79-ce87-4d24-9774-056d1f97d64a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.436712 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-inventory" (OuterVolumeSpecName: "inventory") pod "b059ed79-ce87-4d24-9774-056d1f97d64a" (UID: "b059ed79-ce87-4d24-9774-056d1f97d64a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.491957 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwtsg\" (UniqueName: \"kubernetes.io/projected/b059ed79-ce87-4d24-9774-056d1f97d64a-kube-api-access-hwtsg\") pod \"b059ed79-ce87-4d24-9774-056d1f97d64a\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.492079 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-2\") pod \"b059ed79-ce87-4d24-9774-056d1f97d64a\" (UID: \"b059ed79-ce87-4d24-9774-056d1f97d64a\") " Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.492598 4520 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.492613 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.492623 4520 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.492632 4520 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.492642 4520 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.494675 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b059ed79-ce87-4d24-9774-056d1f97d64a-kube-api-access-hwtsg" (OuterVolumeSpecName: "kube-api-access-hwtsg") pod "b059ed79-ce87-4d24-9774-056d1f97d64a" (UID: "b059ed79-ce87-4d24-9774-056d1f97d64a"). InnerVolumeSpecName "kube-api-access-hwtsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.513269 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b059ed79-ce87-4d24-9774-056d1f97d64a" (UID: "b059ed79-ce87-4d24-9774-056d1f97d64a"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.593626 4520 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b059ed79-ce87-4d24-9774-056d1f97d64a-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:24 crc kubenswrapper[4520]: I0130 07:24:24.593649 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwtsg\" (UniqueName: \"kubernetes.io/projected/b059ed79-ce87-4d24-9774-056d1f97d64a-kube-api-access-hwtsg\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:25 crc kubenswrapper[4520]: I0130 07:24:25.065410 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" event={"ID":"b059ed79-ce87-4d24-9774-056d1f97d64a","Type":"ContainerDied","Data":"83e05aacf2f98b6264bf4c4ada6bf78ce0ccd39c2a72b64aa7fd85d566198a63"} Jan 30 07:24:25 crc kubenswrapper[4520]: I0130 07:24:25.065452 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e05aacf2f98b6264bf4c4ada6bf78ce0ccd39c2a72b64aa7fd85d566198a63" Jan 30 07:24:25 crc kubenswrapper[4520]: I0130 07:24:25.065733 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lzt9v" Jan 30 07:24:27 crc kubenswrapper[4520]: I0130 07:24:27.793119 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:24:27 crc kubenswrapper[4520]: I0130 07:24:27.793533 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.756252 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h88t5"] Jan 30 07:24:31 crc kubenswrapper[4520]: E0130 07:24:31.757463 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="extract-content" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.757555 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="extract-content" Jan 30 07:24:31 crc kubenswrapper[4520]: E0130 07:24:31.757611 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="extract-utilities" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.757681 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="extract-utilities" Jan 30 07:24:31 crc kubenswrapper[4520]: E0130 07:24:31.757744 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="registry-server" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.757805 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="registry-server" Jan 30 07:24:31 crc kubenswrapper[4520]: E0130 07:24:31.757874 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b059ed79-ce87-4d24-9774-056d1f97d64a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.757920 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="b059ed79-ce87-4d24-9774-056d1f97d64a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.758144 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="b059ed79-ce87-4d24-9774-056d1f97d64a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.758228 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee26e987-a102-48cf-b541-1188c31decce" containerName="registry-server" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.759384 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.768128 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h88t5"] Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.823154 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-utilities\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.823194 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-catalog-content\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.823270 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvdv\" (UniqueName: \"kubernetes.io/projected/cee0da0c-55b6-453e-a09f-e53c0f54f530-kube-api-access-lvvdv\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.925093 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvdv\" (UniqueName: \"kubernetes.io/projected/cee0da0c-55b6-453e-a09f-e53c0f54f530-kube-api-access-lvvdv\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.925346 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-utilities\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.925383 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-catalog-content\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.925827 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-utilities\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.926007 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-catalog-content\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:31 crc kubenswrapper[4520]: I0130 07:24:31.940155 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lvvdv\" (UniqueName: \"kubernetes.io/projected/cee0da0c-55b6-453e-a09f-e53c0f54f530-kube-api-access-lvvdv\") pod \"certified-operators-h88t5\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:32 crc kubenswrapper[4520]: I0130 07:24:32.074087 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:32 crc kubenswrapper[4520]: I0130 07:24:32.726061 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h88t5"] Jan 30 07:24:33 crc kubenswrapper[4520]: I0130 07:24:33.117969 4520 generic.go:334] "Generic (PLEG): container finished" podID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerID="59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43" exitCode=0 Jan 30 07:24:33 crc kubenswrapper[4520]: I0130 07:24:33.118063 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88t5" event={"ID":"cee0da0c-55b6-453e-a09f-e53c0f54f530","Type":"ContainerDied","Data":"59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43"} Jan 30 07:24:33 crc kubenswrapper[4520]: I0130 07:24:33.119357 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88t5" event={"ID":"cee0da0c-55b6-453e-a09f-e53c0f54f530","Type":"ContainerStarted","Data":"047b28dbfd3563e358d4806bdc44f734886735d9d4f81158cb845d0c48444818"} Jan 30 07:24:34 crc kubenswrapper[4520]: I0130 07:24:34.127438 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88t5" event={"ID":"cee0da0c-55b6-453e-a09f-e53c0f54f530","Type":"ContainerStarted","Data":"ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c"} Jan 30 07:24:35 crc kubenswrapper[4520]: I0130 07:24:35.136476 4520 generic.go:334] "Generic (PLEG): container finished" podID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerID="ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c" exitCode=0 Jan 30 07:24:35 crc kubenswrapper[4520]: I0130 07:24:35.136559 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88t5" event={"ID":"cee0da0c-55b6-453e-a09f-e53c0f54f530","Type":"ContainerDied","Data":"ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c"} Jan 30 07:24:36 crc kubenswrapper[4520]: I0130 07:24:36.145964 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88t5" event={"ID":"cee0da0c-55b6-453e-a09f-e53c0f54f530","Type":"ContainerStarted","Data":"fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862"} Jan 30 07:24:36 crc kubenswrapper[4520]: I0130 07:24:36.163369 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h88t5" podStartSLOduration=2.645614244 podStartE2EDuration="5.163353124s" podCreationTimestamp="2026-01-30 07:24:31 +0000 UTC" firstStartedPulling="2026-01-30 07:24:33.119663482 +0000 UTC m=+2386.748015653" lastFinishedPulling="2026-01-30 07:24:35.637402353 +0000 UTC m=+2389.265754533" observedRunningTime="2026-01-30 07:24:36.159435118 +0000 UTC m=+2389.787787299" watchObservedRunningTime="2026-01-30 07:24:36.163353124 +0000 UTC m=+2389.791705306" Jan 30 07:24:42 crc kubenswrapper[4520]: I0130 07:24:42.074991 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:42 crc kubenswrapper[4520]: I0130 07:24:42.076640 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:42 crc kubenswrapper[4520]: I0130 07:24:42.110378 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:42 crc kubenswrapper[4520]: I0130 07:24:42.235533 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:42 crc kubenswrapper[4520]: I0130 07:24:42.343262 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h88t5"] Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.219461 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h88t5" podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerName="registry-server" containerID="cri-o://fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862" gracePeriod=2 Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.626707 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.776842 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-catalog-content\") pod \"cee0da0c-55b6-453e-a09f-e53c0f54f530\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.777287 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-utilities\") pod \"cee0da0c-55b6-453e-a09f-e53c0f54f530\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.777396 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvvdv\" (UniqueName: \"kubernetes.io/projected/cee0da0c-55b6-453e-a09f-e53c0f54f530-kube-api-access-lvvdv\") pod \"cee0da0c-55b6-453e-a09f-e53c0f54f530\" (UID: \"cee0da0c-55b6-453e-a09f-e53c0f54f530\") " Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.778180 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-utilities" (OuterVolumeSpecName: "utilities") pod "cee0da0c-55b6-453e-a09f-e53c0f54f530" (UID: "cee0da0c-55b6-453e-a09f-e53c0f54f530"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.785159 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee0da0c-55b6-453e-a09f-e53c0f54f530-kube-api-access-lvvdv" (OuterVolumeSpecName: "kube-api-access-lvvdv") pod "cee0da0c-55b6-453e-a09f-e53c0f54f530" (UID: "cee0da0c-55b6-453e-a09f-e53c0f54f530"). InnerVolumeSpecName "kube-api-access-lvvdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.814640 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cee0da0c-55b6-453e-a09f-e53c0f54f530" (UID: "cee0da0c-55b6-453e-a09f-e53c0f54f530"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.880733 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.880781 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvvdv\" (UniqueName: \"kubernetes.io/projected/cee0da0c-55b6-453e-a09f-e53c0f54f530-kube-api-access-lvvdv\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:44 crc kubenswrapper[4520]: I0130 07:24:44.880798 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cee0da0c-55b6-453e-a09f-e53c0f54f530-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.234267 4520 generic.go:334] "Generic (PLEG): container finished" podID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerID="fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862" exitCode=0 Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.234341 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88t5" event={"ID":"cee0da0c-55b6-453e-a09f-e53c0f54f530","Type":"ContainerDied","Data":"fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862"} Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.234399 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88t5" event={"ID":"cee0da0c-55b6-453e-a09f-e53c0f54f530","Type":"ContainerDied","Data":"047b28dbfd3563e358d4806bdc44f734886735d9d4f81158cb845d0c48444818"} Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.234429 4520 scope.go:117] "RemoveContainer" containerID="fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.235051 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h88t5" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.254789 4520 scope.go:117] "RemoveContainer" containerID="ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.271957 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h88t5"] Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.278680 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h88t5"] Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.283385 4520 scope.go:117] "RemoveContainer" containerID="59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.310899 4520 scope.go:117] "RemoveContainer" containerID="fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862" Jan 30 07:24:45 crc kubenswrapper[4520]: E0130 07:24:45.311370 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862\": container with ID starting with fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862 not found: ID does not exist" containerID="fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.311399 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862"} err="failed to get container status \"fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862\": rpc error: code = NotFound desc = could not find container \"fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862\": container with ID starting with fd00cb193ec31343fa1f9205f1ff6699efff6aa20c3752e04d6ba2bc9a565862 not found: ID does not exist" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.311428 4520 scope.go:117] "RemoveContainer" containerID="ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c" Jan 30 07:24:45 crc kubenswrapper[4520]: E0130 07:24:45.311768 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c\": container with ID starting with ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c not found: ID does not exist" containerID="ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.311813 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c"} err="failed to get container status \"ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c\": rpc error: code = NotFound desc = could not find container \"ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c\": container with ID starting with ae9ac997ae319608c061d736adf4e451d0048fcee5d0d43815d5e45fcd60145c not found: ID does not exist" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.311837 4520 scope.go:117] "RemoveContainer" containerID="59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43" Jan 30 07:24:45 crc kubenswrapper[4520]: E0130 07:24:45.312106 4520 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43\": container with ID starting with 59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43 not found: ID does not exist" containerID="59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43" Jan 30 07:24:45 crc kubenswrapper[4520]: I0130 07:24:45.312128 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43"} err="failed to get container status \"59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43\": rpc error: code = NotFound desc = could not find container \"59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43\": container with ID starting with 59fc42d8bcad1b590a0c50547a37e195b5c39592d719cbf12b4346b48a8c0c43 not found: ID does not exist" Jan 30 07:24:46 crc kubenswrapper[4520]: I0130 07:24:46.734213 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" path="/var/lib/kubelet/pods/cee0da0c-55b6-453e-a09f-e53c0f54f530/volumes" Jan 30 07:24:57 crc kubenswrapper[4520]: I0130 07:24:57.793436 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:24:57 crc kubenswrapper[4520]: I0130 07:24:57.794025 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:24:57 crc kubenswrapper[4520]: I0130 07:24:57.794080 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:24:57 crc kubenswrapper[4520]: I0130 07:24:57.795059 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:24:57 crc kubenswrapper[4520]: I0130 07:24:57.795125 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" gracePeriod=600 Jan 30 07:24:57 crc kubenswrapper[4520]: E0130 07:24:57.946254 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:24:58 crc kubenswrapper[4520]: I0130 07:24:58.355916 4520 generic.go:334] 
"Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" exitCode=0 Jan 30 07:24:58 crc kubenswrapper[4520]: I0130 07:24:58.355979 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9"} Jan 30 07:24:58 crc kubenswrapper[4520]: I0130 07:24:58.356032 4520 scope.go:117] "RemoveContainer" containerID="6f9d3d41b0a37515cd60005bd2f7590ed422a66a445f98c088e023f788133e52" Jan 30 07:24:58 crc kubenswrapper[4520]: I0130 07:24:58.356720 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:24:58 crc kubenswrapper[4520]: E0130 07:24:58.357090 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:25:11 crc kubenswrapper[4520]: I0130 07:25:11.685417 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:25:11 crc kubenswrapper[4520]: E0130 07:25:11.686393 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.815496 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 30 07:25:19 crc kubenswrapper[4520]: E0130 07:25:19.816464 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerName="extract-utilities" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.816480 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerName="extract-utilities" Jan 30 07:25:19 crc kubenswrapper[4520]: E0130 07:25:19.816535 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerName="registry-server" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.816542 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerName="registry-server" Jan 30 07:25:19 crc kubenswrapper[4520]: E0130 07:25:19.816555 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerName="extract-content" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.816560 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerName="extract-content" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.816752 4520 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cee0da0c-55b6-453e-a09f-e53c0f54f530" containerName="registry-server" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.817431 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.818942 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.822859 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-r5wqx" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.823610 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.826423 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.827369 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.851098 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.851135 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.851203 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953461 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953535 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953613 4520 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5jw\" (UniqueName: \"kubernetes.io/projected/8ee881d4-3f07-49b1-8444-b15c5b868b9e-kube-api-access-jl5jw\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953733 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953760 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953863 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953899 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953920 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.953949 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.955252 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.956944 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:19 crc kubenswrapper[4520]: I0130 07:25:19.969241 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.055900 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.055954 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.056428 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.057069 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5jw\" (UniqueName: \"kubernetes.io/projected/8ee881d4-3f07-49b1-8444-b15c5b868b9e-kube-api-access-jl5jw\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.057731 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.057829 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.057867 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.057909 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.058090 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.062675 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.063294 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.078185 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl5jw\" (UniqueName: \"kubernetes.io/projected/8ee881d4-3f07-49b1-8444-b15c5b868b9e-kube-api-access-jl5jw\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.084729 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.135192 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:25:20 crc kubenswrapper[4520]: I0130 07:25:20.698845 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 30 07:25:21 crc kubenswrapper[4520]: I0130 07:25:21.590800 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"8ee881d4-3f07-49b1-8444-b15c5b868b9e","Type":"ContainerStarted","Data":"2b9ffcb34695814d706fac6ab9558ab60ebbfe392522811988d30b560ef171c9"} Jan 30 07:25:26 crc kubenswrapper[4520]: I0130 07:25:26.694860 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:25:26 crc kubenswrapper[4520]: E0130 07:25:26.695949 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:25:40 crc kubenswrapper[4520]: I0130 07:25:40.686273 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:25:40 crc kubenswrapper[4520]: E0130 07:25:40.687382 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:25:51 crc kubenswrapper[4520]: I0130 07:25:51.685882 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:25:51 crc kubenswrapper[4520]: E0130 07:25:51.686683 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:26:04 crc kubenswrapper[4520]: I0130 07:26:04.686440 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:26:04 crc kubenswrapper[4520]: E0130 07:26:04.687627 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:26:17 crc kubenswrapper[4520]: I0130 07:26:17.685652 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:26:17 crc kubenswrapper[4520]: E0130 07:26:17.686533 4520 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:26:27 crc kubenswrapper[4520]: E0130 07:26:27.755925 4520 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:26:27 crc kubenswrapper[4520]: E0130 07:26:27.756582 4520 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b85d0548925081ae8c6bdd697658cec4" Jan 30 07:26:27 crc kubenswrapper[4520]: E0130 07:26:27.759279 4520 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b85d0548925081ae8c6bdd697658cec4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl5jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompPr
ofile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(8ee881d4-3f07-49b1-8444-b15c5b868b9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 07:26:27 crc kubenswrapper[4520]: E0130 07:26:27.760508 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="8ee881d4-3f07-49b1-8444-b15c5b868b9e" Jan 30 07:26:28 crc kubenswrapper[4520]: E0130 07:26:28.352975 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b85d0548925081ae8c6bdd697658cec4\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="8ee881d4-3f07-49b1-8444-b15c5b868b9e" Jan 30 07:26:30 crc kubenswrapper[4520]: I0130 07:26:30.686183 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:26:30 crc kubenswrapper[4520]: E0130 07:26:30.687251 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:26:39 crc kubenswrapper[4520]: I0130 07:26:39.687365 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 07:26:40 crc kubenswrapper[4520]: I0130 07:26:40.321040 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 07:26:41 crc kubenswrapper[4520]: I0130 07:26:41.448882 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"8ee881d4-3f07-49b1-8444-b15c5b868b9e","Type":"ContainerStarted","Data":"de36bf21a16f7f9a539415d5c073f92fba78998bc011ffeefba3ff90c571029c"} Jan 30 07:26:41 crc kubenswrapper[4520]: I0130 07:26:41.476712 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=3.871189052 podStartE2EDuration="1m23.476693666s" podCreationTimestamp="2026-01-30 07:25:18 +0000 UTC" firstStartedPulling="2026-01-30 07:25:20.712114318 +0000 UTC m=+2434.340466489" lastFinishedPulling="2026-01-30 07:26:40.317618922 +0000 UTC m=+2513.945971103" 
observedRunningTime="2026-01-30 07:26:41.466887529 +0000 UTC m=+2515.095239709" watchObservedRunningTime="2026-01-30 07:26:41.476693666 +0000 UTC m=+2515.105045847" Jan 30 07:26:43 crc kubenswrapper[4520]: I0130 07:26:43.686159 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:26:43 crc kubenswrapper[4520]: E0130 07:26:43.686737 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:26:55 crc kubenswrapper[4520]: I0130 07:26:55.685957 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:26:55 crc kubenswrapper[4520]: E0130 07:26:55.686865 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:27:06 crc kubenswrapper[4520]: I0130 07:27:06.691554 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:27:06 crc kubenswrapper[4520]: E0130 07:27:06.692736 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:27:20 crc kubenswrapper[4520]: I0130 07:27:20.686281 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:27:20 crc kubenswrapper[4520]: E0130 07:27:20.687034 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:27:31 crc kubenswrapper[4520]: I0130 07:27:31.686260 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:27:31 crc kubenswrapper[4520]: E0130 07:27:31.687231 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:27:46 crc 
kubenswrapper[4520]: I0130 07:27:46.691684 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:27:46 crc kubenswrapper[4520]: E0130 07:27:46.692906 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:28:01 crc kubenswrapper[4520]: I0130 07:28:01.686022 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:28:01 crc kubenswrapper[4520]: E0130 07:28:01.686859 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:28:15 crc kubenswrapper[4520]: I0130 07:28:15.685884 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:28:15 crc kubenswrapper[4520]: E0130 07:28:15.686728 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:28:27 crc kubenswrapper[4520]: I0130 07:28:27.686275 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:28:27 crc kubenswrapper[4520]: E0130 07:28:27.687227 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:28:40 crc kubenswrapper[4520]: I0130 07:28:40.686311 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:28:40 crc kubenswrapper[4520]: E0130 07:28:40.687227 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:28:53 crc kubenswrapper[4520]: I0130 07:28:53.686197 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:28:53 crc 
kubenswrapper[4520]: E0130 07:28:53.687083 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:29:07 crc kubenswrapper[4520]: I0130 07:29:07.685965 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:29:07 crc kubenswrapper[4520]: E0130 07:29:07.687453 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:29:18 crc kubenswrapper[4520]: I0130 07:29:18.686262 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:29:18 crc kubenswrapper[4520]: E0130 07:29:18.686901 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:29:33 crc kubenswrapper[4520]: I0130 07:29:33.686063 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:29:33 crc kubenswrapper[4520]: E0130 07:29:33.686657 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:29:46 crc kubenswrapper[4520]: I0130 07:29:46.685100 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:29:46 crc kubenswrapper[4520]: E0130 07:29:46.685731 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:29:58 crc kubenswrapper[4520]: I0130 07:29:58.685311 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9" Jan 30 07:29:59 crc kubenswrapper[4520]: I0130 07:29:59.047192 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"9c67b32e626e8044395dd14b425d076fe6cc0a4c2cba075a6abd7ac90514df27"} Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.272845 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9"] Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.278636 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.281001 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.324870 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f064475-503c-498a-a244-60ec4c850544-config-volume\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.324961 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f064475-503c-498a-a244-60ec4c850544-secret-volume\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.325151 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggl9m\" (UniqueName: \"kubernetes.io/projected/5f064475-503c-498a-a244-60ec4c850544-kube-api-access-ggl9m\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.330434 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.370184 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9"] Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.426366 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f064475-503c-498a-a244-60ec4c850544-secret-volume\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.426714 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggl9m\" (UniqueName: \"kubernetes.io/projected/5f064475-503c-498a-a244-60ec4c850544-kube-api-access-ggl9m\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.426821 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5f064475-503c-498a-a244-60ec4c850544-config-volume\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.431304 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f064475-503c-498a-a244-60ec4c850544-config-volume\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.434994 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f064475-503c-498a-a244-60ec4c850544-secret-volume\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.450243 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggl9m\" (UniqueName: \"kubernetes.io/projected/5f064475-503c-498a-a244-60ec4c850544-kube-api-access-ggl9m\") pod \"collect-profiles-29495970-ndnq9\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:00 crc kubenswrapper[4520]: I0130 07:30:00.595607 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:01 crc kubenswrapper[4520]: E0130 07:30:01.165746 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:52424->192.168.25.87:39417: write tcp 192.168.25.87:52424->192.168.25.87:39417: write: broken pipe Jan 30 07:30:01 crc kubenswrapper[4520]: I0130 07:30:01.372689 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9"] Jan 30 07:30:02 crc kubenswrapper[4520]: I0130 07:30:02.076902 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" event={"ID":"5f064475-503c-498a-a244-60ec4c850544","Type":"ContainerDied","Data":"3c8d72686e0199f137341834da2fcb6279bdeeeef513acd33b269e419f473353"} Jan 30 07:30:02 crc kubenswrapper[4520]: I0130 07:30:02.077453 4520 generic.go:334] "Generic (PLEG): container finished" podID="5f064475-503c-498a-a244-60ec4c850544" containerID="3c8d72686e0199f137341834da2fcb6279bdeeeef513acd33b269e419f473353" exitCode=0 Jan 30 07:30:02 crc kubenswrapper[4520]: I0130 07:30:02.077508 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" event={"ID":"5f064475-503c-498a-a244-60ec4c850544","Type":"ContainerStarted","Data":"d2694e8e4d32c3a91b23c9ebfecee0a6e014a87e5de72436cfb7fa527cb06b9c"} Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.511111 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.594200 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f064475-503c-498a-a244-60ec4c850544-config-volume\") pod \"5f064475-503c-498a-a244-60ec4c850544\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.594332 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggl9m\" (UniqueName: \"kubernetes.io/projected/5f064475-503c-498a-a244-60ec4c850544-kube-api-access-ggl9m\") pod \"5f064475-503c-498a-a244-60ec4c850544\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.594374 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f064475-503c-498a-a244-60ec4c850544-secret-volume\") pod \"5f064475-503c-498a-a244-60ec4c850544\" (UID: \"5f064475-503c-498a-a244-60ec4c850544\") " Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.595381 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f064475-503c-498a-a244-60ec4c850544-config-volume" (OuterVolumeSpecName: "config-volume") pod "5f064475-503c-498a-a244-60ec4c850544" (UID: "5f064475-503c-498a-a244-60ec4c850544"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.600730 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f064475-503c-498a-a244-60ec4c850544-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5f064475-503c-498a-a244-60ec4c850544" (UID: "5f064475-503c-498a-a244-60ec4c850544"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.600877 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f064475-503c-498a-a244-60ec4c850544-kube-api-access-ggl9m" (OuterVolumeSpecName: "kube-api-access-ggl9m") pod "5f064475-503c-498a-a244-60ec4c850544" (UID: "5f064475-503c-498a-a244-60ec4c850544"). InnerVolumeSpecName "kube-api-access-ggl9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.696415 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggl9m\" (UniqueName: \"kubernetes.io/projected/5f064475-503c-498a-a244-60ec4c850544-kube-api-access-ggl9m\") on node \"crc\" DevicePath \"\"" Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.696444 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f064475-503c-498a-a244-60ec4c850544-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:30:03 crc kubenswrapper[4520]: I0130 07:30:03.696455 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f064475-503c-498a-a244-60ec4c850544-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:30:04 crc kubenswrapper[4520]: I0130 07:30:04.092431 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" event={"ID":"5f064475-503c-498a-a244-60ec4c850544","Type":"ContainerDied","Data":"d2694e8e4d32c3a91b23c9ebfecee0a6e014a87e5de72436cfb7fa527cb06b9c"} Jan 30 07:30:04 crc kubenswrapper[4520]: I0130 07:30:04.092472 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9" Jan 30 07:30:04 crc kubenswrapper[4520]: I0130 07:30:04.093179 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2694e8e4d32c3a91b23c9ebfecee0a6e014a87e5de72436cfb7fa527cb06b9c" Jan 30 07:30:04 crc kubenswrapper[4520]: I0130 07:30:04.585445 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"] Jan 30 07:30:04 crc kubenswrapper[4520]: I0130 07:30:04.591087 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495925-q62ms"] Jan 30 07:30:04 crc kubenswrapper[4520]: I0130 07:30:04.694207 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350b6a45-2c99-453a-9e85-e97a1adc863d" path="/var/lib/kubelet/pods/350b6a45-2c99-453a-9e85-e97a1adc863d/volumes" Jan 30 07:30:56 crc kubenswrapper[4520]: I0130 07:30:56.364641 4520 scope.go:117] "RemoveContainer" containerID="a36b9458379423d9fd6eff8752f0459f1728424413b43cc5badf4cfbf94e397b" Jan 30 07:32:27 crc kubenswrapper[4520]: I0130 07:32:27.795833 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:32:27 crc kubenswrapper[4520]: I0130 07:32:27.798772 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:32:57 crc kubenswrapper[4520]: I0130 07:32:57.793480 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 30 07:32:27 crc kubenswrapper[4520]: I0130 07:32:27.795833 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:32:27 crc kubenswrapper[4520]: I0130 07:32:27.798772 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:32:57 crc kubenswrapper[4520]: I0130 07:32:57.793480 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:32:57 crc kubenswrapper[4520]: I0130 07:32:57.793899 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:33:27 crc kubenswrapper[4520]: I0130 07:33:27.793151 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:33:27 crc kubenswrapper[4520]: I0130 07:33:27.793728 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:33:27 crc kubenswrapper[4520]: I0130 07:33:27.794945 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:33:27 crc kubenswrapper[4520]: I0130 07:33:27.795927 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c67b32e626e8044395dd14b425d076fe6cc0a4c2cba075a6abd7ac90514df27"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:33:27 crc kubenswrapper[4520]: I0130 07:33:27.797127 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://9c67b32e626e8044395dd14b425d076fe6cc0a4c2cba075a6abd7ac90514df27" gracePeriod=600 Jan 30 07:33:28 crc kubenswrapper[4520]: I0130 07:33:28.580648 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="9c67b32e626e8044395dd14b425d076fe6cc0a4c2cba075a6abd7ac90514df27" exitCode=0 Jan 30 07:33:28 crc kubenswrapper[4520]: I0130 07:33:28.580833 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"9c67b32e626e8044395dd14b425d076fe6cc0a4c2cba075a6abd7ac90514df27"} Jan 30 07:33:28 crc kubenswrapper[4520]: I0130 07:33:28.581511 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b"} Jan 30 07:33:28 crc kubenswrapper[4520]: I0130 07:33:28.583579 4520 scope.go:117] "RemoveContainer" containerID="1beef8304148e4cc2c3110a7989a565521e20531d370fde345640d21f67715a9"
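Three consecutive liveness failures against machine-config-daemon's health endpoint (07:32:27, 07:32:57, 07:33:27, all "connection refused" on 127.0.0.1:8798/health) are what flip the probe to unhealthy, after which the kubelet kills the container under its 600s grace period and immediately starts a replacement (5e0e6d3d...). A rough Go equivalent of that HTTP check loop; the URL and ~30s period come from the log, while the threshold of 3 is inferred from the pattern above, since the pod spec's failureThreshold is not shown:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // Rough equivalent of the HTTP liveness check behind the prober lines.
    // Endpoint and ~30s period are from the log; the threshold of 3 is an
    // assumption read off the failure pattern, not from the pod spec.
    func main() {
        client := &http.Client{Timeout: 1 * time.Second}
        failures := 0
        for {
            resp, err := client.Get("http://127.0.0.1:8798/health")
            healthy := err == nil && resp.StatusCode >= 200 && resp.StatusCode < 400
            if err == nil {
                resp.Body.Close()
            }
            if healthy {
                failures = 0
            } else {
                failures++ // "connect: connection refused" lands here
                fmt.Printf("probe failed (%d consecutive): %v\n", failures, err)
            }
            if failures >= 3 {
                fmt.Println("unhealthy: kubelet would kill and restart the container")
                return
            }
            time.Sleep(30 * time.Second)
        }
    }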
"RemoveStaleState: removing container" podUID="5f064475-503c-498a-a244-60ec4c850544" containerName="collect-profiles" Jan 30 07:35:51 crc kubenswrapper[4520]: I0130 07:35:51.873527 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f064475-503c-498a-a244-60ec4c850544" containerName="collect-profiles" Jan 30 07:35:51 crc kubenswrapper[4520]: I0130 07:35:51.875132 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f064475-503c-498a-a244-60ec4c850544" containerName="collect-profiles" Jan 30 07:35:51 crc kubenswrapper[4520]: I0130 07:35:51.879705 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:51 crc kubenswrapper[4520]: I0130 07:35:51.988339 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxh7f\" (UniqueName: \"kubernetes.io/projected/ae069169-ae3c-4838-b139-719c779adbd6-kube-api-access-dxh7f\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:51 crc kubenswrapper[4520]: I0130 07:35:51.988597 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-utilities\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:51 crc kubenswrapper[4520]: I0130 07:35:51.988644 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-catalog-content\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:52 crc kubenswrapper[4520]: I0130 07:35:52.089770 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-catalog-content\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:52 crc kubenswrapper[4520]: I0130 07:35:52.090142 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxh7f\" (UniqueName: \"kubernetes.io/projected/ae069169-ae3c-4838-b139-719c779adbd6-kube-api-access-dxh7f\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:52 crc kubenswrapper[4520]: I0130 07:35:52.090229 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-utilities\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:52 crc kubenswrapper[4520]: I0130 07:35:52.094378 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-catalog-content\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:52 
crc kubenswrapper[4520]: I0130 07:35:52.095324 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-utilities\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:52 crc kubenswrapper[4520]: I0130 07:35:52.120187 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxh7f\" (UniqueName: \"kubernetes.io/projected/ae069169-ae3c-4838-b139-719c779adbd6-kube-api-access-dxh7f\") pod \"certified-operators-bdzk7\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:52 crc kubenswrapper[4520]: I0130 07:35:52.170174 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdzk7"] Jan 30 07:35:52 crc kubenswrapper[4520]: I0130 07:35:52.199783 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:35:53 crc kubenswrapper[4520]: I0130 07:35:53.035464 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdzk7"] Jan 30 07:35:53 crc kubenswrapper[4520]: I0130 07:35:53.732814 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdzk7" event={"ID":"ae069169-ae3c-4838-b139-719c779adbd6","Type":"ContainerDied","Data":"43801e34aed0c0ea8dcbbf0e195fa983e709978bdfcf52c345e007716a607bd6"} Jan 30 07:35:53 crc kubenswrapper[4520]: I0130 07:35:53.733804 4520 generic.go:334] "Generic (PLEG): container finished" podID="ae069169-ae3c-4838-b139-719c779adbd6" containerID="43801e34aed0c0ea8dcbbf0e195fa983e709978bdfcf52c345e007716a607bd6" exitCode=0 Jan 30 07:35:53 crc kubenswrapper[4520]: I0130 07:35:53.733872 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdzk7" event={"ID":"ae069169-ae3c-4838-b139-719c779adbd6","Type":"ContainerStarted","Data":"2c26ac2be6ac5b1aa49b11426c52ccce0742cbcc0a3fe57e0c5dbd3172aa60ab"} Jan 30 07:35:53 crc kubenswrapper[4520]: I0130 07:35:53.741670 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 07:35:54 crc kubenswrapper[4520]: I0130 07:35:54.745195 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdzk7" event={"ID":"ae069169-ae3c-4838-b139-719c779adbd6","Type":"ContainerStarted","Data":"a708a5f137838545310620c1583bf8c4407e242ba89f2002c6c63c62f9cc5094"} Jan 30 07:35:54 crc kubenswrapper[4520]: E0130 07:35:54.833454 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:37388->192.168.25.87:39417: write tcp 192.168.25.87:37388->192.168.25.87:39417: write: broken pipe Jan 30 07:35:55 crc kubenswrapper[4520]: E0130 07:35:55.531401 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:37406->192.168.25.87:39417: write tcp 192.168.25.87:37406->192.168.25.87:39417: write: connection reset by peer Jan 30 07:35:56 crc kubenswrapper[4520]: I0130 07:35:56.792021 4520 generic.go:334] "Generic (PLEG): container finished" podID="ae069169-ae3c-4838-b139-719c779adbd6" containerID="a708a5f137838545310620c1583bf8c4407e242ba89f2002c6c63c62f9cc5094" exitCode=0 Jan 30 07:35:56 crc kubenswrapper[4520]: I0130 07:35:56.792383 4520 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdzk7" event={"ID":"ae069169-ae3c-4838-b139-719c779adbd6","Type":"ContainerDied","Data":"a708a5f137838545310620c1583bf8c4407e242ba89f2002c6c63c62f9cc5094"} Jan 30 07:35:57 crc kubenswrapper[4520]: I0130 07:35:57.793101 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:35:57 crc kubenswrapper[4520]: I0130 07:35:57.794318 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:35:57 crc kubenswrapper[4520]: I0130 07:35:57.802422 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdzk7" event={"ID":"ae069169-ae3c-4838-b139-719c779adbd6","Type":"ContainerStarted","Data":"1e0631a8d70c63516bd1ef308f72fc907ffe1372a93d815b1d5ab35b349d4696"} Jan 30 07:35:57 crc kubenswrapper[4520]: I0130 07:35:57.829967 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bdzk7" podStartSLOduration=3.2486107029999998 podStartE2EDuration="6.829945578s" podCreationTimestamp="2026-01-30 07:35:51 +0000 UTC" firstStartedPulling="2026-01-30 07:35:53.73552618 +0000 UTC m=+3067.363878362" lastFinishedPulling="2026-01-30 07:35:57.316861057 +0000 UTC m=+3070.945213237" observedRunningTime="2026-01-30 07:35:57.822803279 +0000 UTC m=+3071.451155460" watchObservedRunningTime="2026-01-30 07:35:57.829945578 +0000 UTC m=+3071.458297759" Jan 30 07:36:02 crc kubenswrapper[4520]: I0130 07:36:02.200049 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:36:02 crc kubenswrapper[4520]: I0130 07:36:02.200366 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:36:03 crc kubenswrapper[4520]: I0130 07:36:03.243933 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bdzk7" podUID="ae069169-ae3c-4838-b139-719c779adbd6" containerName="registry-server" probeResult="failure" output=< Jan 30 07:36:03 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:36:03 crc kubenswrapper[4520]: > Jan 30 07:36:12 crc kubenswrapper[4520]: I0130 07:36:12.761583 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:36:12 crc kubenswrapper[4520]: I0130 07:36:12.795600 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:36:14 crc kubenswrapper[4520]: I0130 07:36:14.052940 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bdzk7"] Jan 30 07:36:14 crc kubenswrapper[4520]: I0130 07:36:14.055581 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bdzk7" 
podUID="ae069169-ae3c-4838-b139-719c779adbd6" containerName="registry-server" containerID="cri-o://1e0631a8d70c63516bd1ef308f72fc907ffe1372a93d815b1d5ab35b349d4696" gracePeriod=2 Jan 30 07:36:14 crc kubenswrapper[4520]: I0130 07:36:14.935659 4520 generic.go:334] "Generic (PLEG): container finished" podID="ae069169-ae3c-4838-b139-719c779adbd6" containerID="1e0631a8d70c63516bd1ef308f72fc907ffe1372a93d815b1d5ab35b349d4696" exitCode=0 Jan 30 07:36:14 crc kubenswrapper[4520]: I0130 07:36:14.935845 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdzk7" event={"ID":"ae069169-ae3c-4838-b139-719c779adbd6","Type":"ContainerDied","Data":"1e0631a8d70c63516bd1ef308f72fc907ffe1372a93d815b1d5ab35b349d4696"} Jan 30 07:36:14 crc kubenswrapper[4520]: I0130 07:36:14.936503 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdzk7" event={"ID":"ae069169-ae3c-4838-b139-719c779adbd6","Type":"ContainerDied","Data":"2c26ac2be6ac5b1aa49b11426c52ccce0742cbcc0a3fe57e0c5dbd3172aa60ab"} Jan 30 07:36:14 crc kubenswrapper[4520]: I0130 07:36:14.936539 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c26ac2be6ac5b1aa49b11426c52ccce0742cbcc0a3fe57e0c5dbd3172aa60ab" Jan 30 07:36:14 crc kubenswrapper[4520]: I0130 07:36:14.978203 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.152072 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxh7f\" (UniqueName: \"kubernetes.io/projected/ae069169-ae3c-4838-b139-719c779adbd6-kube-api-access-dxh7f\") pod \"ae069169-ae3c-4838-b139-719c779adbd6\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.152281 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-utilities\") pod \"ae069169-ae3c-4838-b139-719c779adbd6\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.152377 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-catalog-content\") pod \"ae069169-ae3c-4838-b139-719c779adbd6\" (UID: \"ae069169-ae3c-4838-b139-719c779adbd6\") " Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.155634 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-utilities" (OuterVolumeSpecName: "utilities") pod "ae069169-ae3c-4838-b139-719c779adbd6" (UID: "ae069169-ae3c-4838-b139-719c779adbd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.169680 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae069169-ae3c-4838-b139-719c779adbd6-kube-api-access-dxh7f" (OuterVolumeSpecName: "kube-api-access-dxh7f") pod "ae069169-ae3c-4838-b139-719c779adbd6" (UID: "ae069169-ae3c-4838-b139-719c779adbd6"). InnerVolumeSpecName "kube-api-access-dxh7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.244028 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae069169-ae3c-4838-b139-719c779adbd6" (UID: "ae069169-ae3c-4838-b139-719c779adbd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.254684 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.254710 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae069169-ae3c-4838-b139-719c779adbd6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.254722 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxh7f\" (UniqueName: \"kubernetes.io/projected/ae069169-ae3c-4838-b139-719c779adbd6-kube-api-access-dxh7f\") on node \"crc\" DevicePath \"\"" Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.941311 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdzk7" Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.967756 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bdzk7"] Jan 30 07:36:15 crc kubenswrapper[4520]: I0130 07:36:15.973776 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bdzk7"] Jan 30 07:36:16 crc kubenswrapper[4520]: I0130 07:36:16.696698 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae069169-ae3c-4838-b139-719c779adbd6" path="/var/lib/kubelet/pods/ae069169-ae3c-4838-b139-719c779adbd6/volumes" Jan 30 07:36:27 crc kubenswrapper[4520]: I0130 07:36:27.794482 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:36:27 crc kubenswrapper[4520]: I0130 07:36:27.797351 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:36:57 crc kubenswrapper[4520]: I0130 07:36:57.794408 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:36:57 crc kubenswrapper[4520]: I0130 07:36:57.796804 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Jan 30 07:36:27 crc kubenswrapper[4520]: I0130 07:36:27.794482 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:36:27 crc kubenswrapper[4520]: I0130 07:36:27.797351 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:36:57 crc kubenswrapper[4520]: I0130 07:36:57.794408 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:36:57 crc kubenswrapper[4520]: I0130 07:36:57.796804 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:36:57 crc kubenswrapper[4520]: I0130 07:36:57.797494 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:36:57 crc kubenswrapper[4520]: I0130 07:36:57.799152 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:36:57 crc kubenswrapper[4520]: I0130 07:36:57.799221 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" gracePeriod=600 Jan 30 07:36:57 crc kubenswrapper[4520]: E0130 07:36:57.932484 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:36:58 crc kubenswrapper[4520]: I0130 07:36:58.227591 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" exitCode=0 Jan 30 07:36:58 crc kubenswrapper[4520]: I0130 07:36:58.229864 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b"} Jan 30 07:36:58 crc kubenswrapper[4520]: I0130 07:36:58.233634 4520 scope.go:117] "RemoveContainer" containerID="9c67b32e626e8044395dd14b425d076fe6cc0a4c2cba075a6abd7ac90514df27" Jan 30 07:36:58 crc kubenswrapper[4520]: I0130 07:36:58.234468 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:36:58 crc kubenswrapper[4520]: E0130 07:36:58.236219 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
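At this point machine-config-daemon is in CrashLoopBackOff: the kubelet refuses to restart it immediately and reports "back-off 5m0s restarting failed container". The restart delay doubles on each crash up to a cap; only the 5m0s cap is visible in the message, while the 10s initial delay below is the commonly documented kubelet default, assumed here:

    package main

    import (
        "fmt"
        "time"
    )

    // The delays implied by "back-off 5m0s": exponential doubling capped at
    // five minutes. The 10s initial delay is the widely documented kubelet
    // default and is an assumption here; only the 5m cap appears in the log.
    func main() {
        const maxDelay = 5 * time.Minute
        delay := 10 * time.Second
        for i := 1; i <= 7; i++ {
            fmt.Printf("restart %d: wait %v\n", i, delay) // 10s, 20s, 40s, 1m20s, 2m40s, 5m, 5m
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }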
containerName="registry-server" Jan 30 07:36:59 crc kubenswrapper[4520]: E0130 07:36:59.395231 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae069169-ae3c-4838-b139-719c779adbd6" containerName="extract-utilities" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.395240 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae069169-ae3c-4838-b139-719c779adbd6" containerName="extract-utilities" Jan 30 07:36:59 crc kubenswrapper[4520]: E0130 07:36:59.395251 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae069169-ae3c-4838-b139-719c779adbd6" containerName="extract-content" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.395256 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae069169-ae3c-4838-b139-719c779adbd6" containerName="extract-content" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.396077 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae069169-ae3c-4838-b139-719c779adbd6" containerName="registry-server" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.398969 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.459168 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkc22"] Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.581027 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-utilities\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.581245 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-catalog-content\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.581366 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpgl7\" (UniqueName: \"kubernetes.io/projected/ed10ab17-c950-4e94-8c42-f94a51e47083-kube-api-access-xpgl7\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.682608 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-utilities\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.682666 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-catalog-content\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.682713 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xpgl7\" (UniqueName: \"kubernetes.io/projected/ed10ab17-c950-4e94-8c42-f94a51e47083-kube-api-access-xpgl7\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.685762 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-catalog-content\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.686266 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-utilities\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.711153 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpgl7\" (UniqueName: \"kubernetes.io/projected/ed10ab17-c950-4e94-8c42-f94a51e47083-kube-api-access-xpgl7\") pod \"redhat-marketplace-fkc22\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:36:59 crc kubenswrapper[4520]: I0130 07:36:59.723615 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.138670 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4k8cc"] Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.140335 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.158155 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k8cc"] Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.191399 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf48478-d19a-4c05-999a-d0c96c6ddbec-utilities\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.191669 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf48478-d19a-4c05-999a-d0c96c6ddbec-catalog-content\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.191709 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crw2d\" (UniqueName: \"kubernetes.io/projected/bcf48478-d19a-4c05-999a-d0c96c6ddbec-kube-api-access-crw2d\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.293049 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf48478-d19a-4c05-999a-d0c96c6ddbec-catalog-content\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.293093 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crw2d\" (UniqueName: \"kubernetes.io/projected/bcf48478-d19a-4c05-999a-d0c96c6ddbec-kube-api-access-crw2d\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.293170 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf48478-d19a-4c05-999a-d0c96c6ddbec-utilities\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.295854 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf48478-d19a-4c05-999a-d0c96c6ddbec-catalog-content\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.296159 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf48478-d19a-4c05-999a-d0c96c6ddbec-utilities\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.323731 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-crw2d\" (UniqueName: \"kubernetes.io/projected/bcf48478-d19a-4c05-999a-d0c96c6ddbec-kube-api-access-crw2d\") pod \"community-operators-4k8cc\" (UID: \"bcf48478-d19a-4c05-999a-d0c96c6ddbec\") " pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.466430 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkc22"] Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.488708 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:00 crc kubenswrapper[4520]: I0130 07:37:00.929389 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k8cc"] Jan 30 07:37:00 crc kubenswrapper[4520]: W0130 07:37:00.932993 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf48478_d19a_4c05_999a_d0c96c6ddbec.slice/crio-5a271f6335e1359eaeade05236c7b02ad890da12553a56ab73431c54dc02e78e WatchSource:0}: Error finding container 5a271f6335e1359eaeade05236c7b02ad890da12553a56ab73431c54dc02e78e: Status 404 returned error can't find the container with id 5a271f6335e1359eaeade05236c7b02ad890da12553a56ab73431c54dc02e78e Jan 30 07:37:01 crc kubenswrapper[4520]: I0130 07:37:01.264457 4520 generic.go:334] "Generic (PLEG): container finished" podID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerID="bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6" exitCode=0 Jan 30 07:37:01 crc kubenswrapper[4520]: I0130 07:37:01.264984 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkc22" event={"ID":"ed10ab17-c950-4e94-8c42-f94a51e47083","Type":"ContainerDied","Data":"bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6"} Jan 30 07:37:01 crc kubenswrapper[4520]: I0130 07:37:01.265070 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkc22" event={"ID":"ed10ab17-c950-4e94-8c42-f94a51e47083","Type":"ContainerStarted","Data":"7fb337208a1077ae5237dc4f2f09a3e7b1eb22138de30bf665703577261d440f"} Jan 30 07:37:01 crc kubenswrapper[4520]: I0130 07:37:01.272368 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8cc" event={"ID":"bcf48478-d19a-4c05-999a-d0c96c6ddbec","Type":"ContainerDied","Data":"f39e378e97affc99a0d243381ac5f01a44c83b8cd26802afe2d3743658a64224"} Jan 30 07:37:01 crc kubenswrapper[4520]: I0130 07:37:01.272206 4520 generic.go:334] "Generic (PLEG): container finished" podID="bcf48478-d19a-4c05-999a-d0c96c6ddbec" containerID="f39e378e97affc99a0d243381ac5f01a44c83b8cd26802afe2d3743658a64224" exitCode=0 Jan 30 07:37:01 crc kubenswrapper[4520]: I0130 07:37:01.273898 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8cc" event={"ID":"bcf48478-d19a-4c05-999a-d0c96c6ddbec","Type":"ContainerStarted","Data":"5a271f6335e1359eaeade05236c7b02ad890da12553a56ab73431c54dc02e78e"} Jan 30 07:37:02 crc kubenswrapper[4520]: I0130 07:37:02.287960 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkc22" event={"ID":"ed10ab17-c950-4e94-8c42-f94a51e47083","Type":"ContainerStarted","Data":"76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b"} Jan 30 07:37:02 crc kubenswrapper[4520]: I0130 
07:37:02.765358 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4d5j"] Jan 30 07:37:02 crc kubenswrapper[4520]: I0130 07:37:02.769460 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:02 crc kubenswrapper[4520]: I0130 07:37:02.788138 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4d5j"] Jan 30 07:37:02 crc kubenswrapper[4520]: I0130 07:37:02.936824 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-utilities\") pod \"redhat-operators-x4d5j\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:02 crc kubenswrapper[4520]: I0130 07:37:02.936875 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-catalog-content\") pod \"redhat-operators-x4d5j\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:02 crc kubenswrapper[4520]: I0130 07:37:02.937685 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vj77\" (UniqueName: \"kubernetes.io/projected/61cc98f1-a66d-488c-a076-914ada7e8de1-kube-api-access-9vj77\") pod \"redhat-operators-x4d5j\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.039625 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vj77\" (UniqueName: \"kubernetes.io/projected/61cc98f1-a66d-488c-a076-914ada7e8de1-kube-api-access-9vj77\") pod \"redhat-operators-x4d5j\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.039713 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-utilities\") pod \"redhat-operators-x4d5j\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.039751 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-catalog-content\") pod \"redhat-operators-x4d5j\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.042703 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-catalog-content\") pod \"redhat-operators-x4d5j\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.043630 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-utilities\") pod \"redhat-operators-x4d5j\" (UID: 
\"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.071727 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vj77\" (UniqueName: \"kubernetes.io/projected/61cc98f1-a66d-488c-a076-914ada7e8de1-kube-api-access-9vj77\") pod \"redhat-operators-x4d5j\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.087905 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.296215 4520 generic.go:334] "Generic (PLEG): container finished" podID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerID="76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b" exitCode=0 Jan 30 07:37:03 crc kubenswrapper[4520]: I0130 07:37:03.296260 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkc22" event={"ID":"ed10ab17-c950-4e94-8c42-f94a51e47083","Type":"ContainerDied","Data":"76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b"} Jan 30 07:37:04 crc kubenswrapper[4520]: I0130 07:37:04.197734 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4d5j"] Jan 30 07:37:04 crc kubenswrapper[4520]: I0130 07:37:04.307233 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkc22" event={"ID":"ed10ab17-c950-4e94-8c42-f94a51e47083","Type":"ContainerStarted","Data":"d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1"} Jan 30 07:37:04 crc kubenswrapper[4520]: I0130 07:37:04.310469 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4d5j" event={"ID":"61cc98f1-a66d-488c-a076-914ada7e8de1","Type":"ContainerStarted","Data":"57511313b0e0029a28cbea244dfecd658bb879cb4a5d30f0adc8c3a94336800e"} Jan 30 07:37:04 crc kubenswrapper[4520]: I0130 07:37:04.361977 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fkc22" podStartSLOduration=2.848469424 podStartE2EDuration="5.359339441s" podCreationTimestamp="2026-01-30 07:36:59 +0000 UTC" firstStartedPulling="2026-01-30 07:37:01.268798706 +0000 UTC m=+3134.897150887" lastFinishedPulling="2026-01-30 07:37:03.779668722 +0000 UTC m=+3137.408020904" observedRunningTime="2026-01-30 07:37:04.351071525 +0000 UTC m=+3137.979423707" watchObservedRunningTime="2026-01-30 07:37:04.359339441 +0000 UTC m=+3137.987691621" Jan 30 07:37:05 crc kubenswrapper[4520]: I0130 07:37:05.326895 4520 generic.go:334] "Generic (PLEG): container finished" podID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerID="117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300" exitCode=0 Jan 30 07:37:05 crc kubenswrapper[4520]: I0130 07:37:05.326992 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4d5j" event={"ID":"61cc98f1-a66d-488c-a076-914ada7e8de1","Type":"ContainerDied","Data":"117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300"} Jan 30 07:37:09 crc kubenswrapper[4520]: I0130 07:37:09.373678 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8cc" 
event={"ID":"bcf48478-d19a-4c05-999a-d0c96c6ddbec","Type":"ContainerStarted","Data":"bad53c250ad833be21dec79a9adde6548a4d5e77486cf0bdee6267c540ab4a4e"} Jan 30 07:37:09 crc kubenswrapper[4520]: I0130 07:37:09.377120 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4d5j" event={"ID":"61cc98f1-a66d-488c-a076-914ada7e8de1","Type":"ContainerStarted","Data":"8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c"} Jan 30 07:37:09 crc kubenswrapper[4520]: I0130 07:37:09.724686 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:37:09 crc kubenswrapper[4520]: I0130 07:37:09.724943 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:37:11 crc kubenswrapper[4520]: I0130 07:37:11.454999 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fkc22" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:11 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:11 crc kubenswrapper[4520]: > Jan 30 07:37:11 crc kubenswrapper[4520]: I0130 07:37:11.688507 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:37:11 crc kubenswrapper[4520]: E0130 07:37:11.691225 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:37:14 crc kubenswrapper[4520]: I0130 07:37:14.433762 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8cc" event={"ID":"bcf48478-d19a-4c05-999a-d0c96c6ddbec","Type":"ContainerDied","Data":"bad53c250ad833be21dec79a9adde6548a4d5e77486cf0bdee6267c540ab4a4e"} Jan 30 07:37:14 crc kubenswrapper[4520]: I0130 07:37:14.434463 4520 generic.go:334] "Generic (PLEG): container finished" podID="bcf48478-d19a-4c05-999a-d0c96c6ddbec" containerID="bad53c250ad833be21dec79a9adde6548a4d5e77486cf0bdee6267c540ab4a4e" exitCode=0 Jan 30 07:37:15 crc kubenswrapper[4520]: I0130 07:37:15.445232 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k8cc" event={"ID":"bcf48478-d19a-4c05-999a-d0c96c6ddbec","Type":"ContainerStarted","Data":"b142f346e806b69a7515e851c6ad9163623374e2d96080ffb53e975bc61f6641"} Jan 30 07:37:15 crc kubenswrapper[4520]: I0130 07:37:15.472604 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4k8cc" podStartSLOduration=1.7106009599999998 podStartE2EDuration="15.472134025s" podCreationTimestamp="2026-01-30 07:37:00 +0000 UTC" firstStartedPulling="2026-01-30 07:37:01.279259273 +0000 UTC m=+3134.907611455" lastFinishedPulling="2026-01-30 07:37:15.040792339 +0000 UTC m=+3148.669144520" observedRunningTime="2026-01-30 07:37:15.471087518 +0000 UTC m=+3149.099439699" watchObservedRunningTime="2026-01-30 07:37:15.472134025 +0000 UTC m=+3149.100486207" Jan 30 07:37:19 crc kubenswrapper[4520]: I0130 07:37:19.522953 4520 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" podUID="c2f02050-fdee-42d1-87c0-74104b2aa6bc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:19 crc kubenswrapper[4520]: I0130 07:37:19.522950 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" podUID="c2f02050-fdee-42d1-87c0-74104b2aa6bc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:19 crc kubenswrapper[4520]: I0130 07:37:19.770932 4520 patch_prober.go:28] interesting pod/oauth-openshift-6686467b65-4qb7w container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:19 crc kubenswrapper[4520]: I0130 07:37:19.772305 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:19 crc kubenswrapper[4520]: I0130 07:37:19.770953 4520 patch_prober.go:28] interesting pod/oauth-openshift-6686467b65-4qb7w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:19 crc kubenswrapper[4520]: I0130 07:37:19.772417 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:20 crc kubenswrapper[4520]: I0130 07:37:20.299711 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" podUID="c8e83470-7d61-4906-9351-b93815bd1c72" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:20 crc kubenswrapper[4520]: I0130 07:37:20.299865 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" podUID="c8e83470-7d61-4906-9351-b93815bd1c72" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:20 crc kubenswrapper[4520]: I0130 07:37:20.490237 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:20 crc kubenswrapper[4520]: I0130 07:37:20.490293 4520 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:37:20 crc kubenswrapper[4520]: I0130 07:37:20.945679 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-9n5hv" podUID="b338bd18-f666-4648-9d7f-325d75b9592a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:20 crc kubenswrapper[4520]: I0130 07:37:20.945913 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-9n5hv" podUID="b338bd18-f666-4648-9d7f-325d75b9592a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.307108 4520 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.307700 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.437559 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.450177 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.450348 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podUID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.450386 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.450410 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podUID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get 
\"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.959943 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:21 crc kubenswrapper[4520]: I0130 07:37:21.961177 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.391698 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-rr7cw" podUID="2ad2dd3f-550a-483f-84c0-d3c9a7477c5b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.432725 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-rr7cw" podUID="2ad2dd3f-550a-483f-84c0-d3c9a7477c5b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.608672 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" podUID="d34fd2f5-b868-4eb8-9708-48b5e31e1397" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.608778 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-df2r7" podUID="d34fd2f5-b868-4eb8-9708-48b5e31e1397" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.735536 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" podUID="fbf504d7-8829-43eb-983a-e7be0f5929ac" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.736319 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" podUID="fbf504d7-8829-43eb-983a-e7be0f5929ac" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.848388 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fkc22" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:22 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:22 crc kubenswrapper[4520]: > Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.851653 4520 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4k8cc" podUID="bcf48478-d19a-4c05-999a-d0c96c6ddbec" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:22 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:22 crc kubenswrapper[4520]: > Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.982774 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" podUID="6bb8d69e-cfd3-4d0f-9c93-53716539e927" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:22 crc kubenswrapper[4520]: I0130 07:37:22.982810 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" podUID="cd83993b-94e2-438b-9f19-8179f70b4a0e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.064811 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-52h27" podUID="cd83993b-94e2-438b-9f19-8179f70b4a0e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.064867 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" podUID="6bb8d69e-cfd3-4d0f-9c93-53716539e927" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.065004 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" podUID="255320d6-1503-4351-ad06-7794cbbdd120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.229772 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" podUID="80a81bc2-ebfd-4fa9-80ed-ddb70fb32677" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.229868 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" podUID="06701a52-2501-4045-b254-90b886c11b47" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.229772 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2t4v6" podUID="06701a52-2501-4045-b254-90b886c11b47" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.229922 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" podUID="80a81bc2-ebfd-4fa9-80ed-ddb70fb32677" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.230243 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" podUID="255320d6-1503-4351-ad06-7794cbbdd120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.230283 4520 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b9tbv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.230320 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" podUID="8a370c00-eeac-4281-8793-33a8c2d4b9e2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.230285 4520 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-6h87w container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.16:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.230357 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6h87w" podUID="f16e0121-e604-4297-8068-53389b66f567" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.16:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.559669 4520 patch_prober.go:28] interesting pod/console-5698ddd759-pv9lh container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.559679 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" podUID="45285265-5fe0-4c19-a169-fe2598b27a5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.559731 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5698ddd759-pv9lh" 
podUID="1117f9de-e43c-4012-8d6d-1d975e62a4cb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.559924 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-6ksdm" podUID="45285265-5fe0-4c19-a169-fe2598b27a5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.624116 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.624157 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.624209 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.624221 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.630694 4520 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-dqjws container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.630748 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" podUID="22d49062-540d-414e-b0c6-2c20d411fa71" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.965326 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:23 crc 
kubenswrapper[4520]: I0130 07:37:23.965450 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:23 crc kubenswrapper[4520]: I0130 07:37:23.967259 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.552660 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.552717 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.552759 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.552813 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.552670 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" podUID="cdf5fc79-647e-4d70-8785-682d7f27ce10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.634817 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.634889 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.635835 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf 
container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:24 crc kubenswrapper[4520]: I0130 07:37:24.635858 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:25 crc kubenswrapper[4520]: I0130 07:37:25.636546 4520 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:25 crc kubenswrapper[4520]: I0130 07:37:25.636650 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.694460 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:37:26 crc kubenswrapper[4520]: E0130 07:37:26.697231 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.871220 4520 patch_prober.go:28] interesting pod/controller-manager-7f8cd9cf7d-bdgpj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.871225 4520 patch_prober.go:28] interesting pod/route-controller-manager-864b9b6b9d-wjphz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.871215 4520 patch_prober.go:28] interesting pod/route-controller-manager-864b9b6b9d-wjphz container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.871225 4520 patch_prober.go:28] interesting pod/controller-manager-7f8cd9cf7d-bdgpj container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get 
\"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.873100 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.873096 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" podUID="7096caef-a90c-4c67-bb72-972e1415d8c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.873101 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" podUID="7096caef-a90c-4c67-bb72-972e1415d8c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:26 crc kubenswrapper[4520]: I0130 07:37:26.873158 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:28 crc kubenswrapper[4520]: I0130 07:37:28.607019 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:28 crc kubenswrapper[4520]: I0130 07:37:28.608396 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:28 crc kubenswrapper[4520]: I0130 07:37:28.608286 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:28 crc kubenswrapper[4520]: I0130 07:37:28.608446 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:28 crc kubenswrapper[4520]: I0130 07:37:28.915715 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" podUID="3099544c-3b89-415c-aea6-f56b7581a803" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:29 crc kubenswrapper[4520]: I0130 07:37:29.444730 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" podUID="c2f02050-fdee-42d1-87c0-74104b2aa6bc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:29 crc kubenswrapper[4520]: I0130 07:37:29.770099 4520 patch_prober.go:28] interesting pod/oauth-openshift-6686467b65-4qb7w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:29 crc kubenswrapper[4520]: I0130 07:37:29.770408 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:29 crc kubenswrapper[4520]: I0130 07:37:29.771603 4520 patch_prober.go:28] interesting pod/oauth-openshift-6686467b65-4qb7w container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:29 crc kubenswrapper[4520]: I0130 07:37:29.771655 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:29 crc kubenswrapper[4520]: I0130 07:37:29.941015 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" podUID="8c17950d-e37b-477d-87d9-d3a92b487ff3" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 07:37:30 crc kubenswrapper[4520]: I0130 07:37:30.299673 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" podUID="c8e83470-7d61-4906-9351-b93815bd1c72" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:30 crc kubenswrapper[4520]: I0130 07:37:30.299688 4520 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" podUID="c8e83470-7d61-4906-9351-b93815bd1c72" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:30 crc kubenswrapper[4520]: I0130 07:37:30.636287 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4d5j" event={"ID":"61cc98f1-a66d-488c-a076-914ada7e8de1","Type":"ContainerDied","Data":"8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c"} Jan 30 07:37:30 crc kubenswrapper[4520]: I0130 07:37:30.636283 4520 generic.go:334] "Generic (PLEG): container finished" podID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerID="8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c" exitCode=0 Jan 30 07:37:30 crc kubenswrapper[4520]: I0130 07:37:30.944663 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-9n5hv" podUID="b338bd18-f666-4648-9d7f-325d75b9592a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:30 crc kubenswrapper[4520]: I0130 07:37:30.945153 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-9n5hv" podUID="b338bd18-f666-4648-9d7f-325d75b9592a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:30 crc kubenswrapper[4520]: I0130 07:37:30.960624 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-cr54l" podUID="8c17950d-e37b-477d-87d9-d3a92b487ff3" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.052869 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-xdqs6" podUID="7d581456-1ad4-4ae7-90c6-00b61382b16a" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:31 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:31 crc kubenswrapper[4520]: > Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.052878 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xdqs6" podUID="7d581456-1ad4-4ae7-90c6-00b61382b16a" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:31 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:31 crc kubenswrapper[4520]: > Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.307038 4520 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.307099 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.429612 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.470671 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.555683 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.555704 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podUID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.556732 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podUID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.652682 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.652747 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.652682 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.652990 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.950055 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fkc22" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:31 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:31 crc kubenswrapper[4520]: > Jan 30 07:37:31 crc kubenswrapper[4520]: I0130 07:37:31.955714 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4k8cc" podUID="bcf48478-d19a-4c05-999a-d0c96c6ddbec" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:31 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:31 crc kubenswrapper[4520]: > Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.432611 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-rr7cw" podUID="2ad2dd3f-550a-483f-84c0-d3c9a7477c5b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.432674 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-rr7cw" podUID="2ad2dd3f-550a-483f-84c0-d3c9a7477c5b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.577684 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-qklbf" podUID="dda4dad2-f4d8-494e-9c59-28413625eb1d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.634687 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gms89" podUID="7ac2569b-0787-4f14-9039-a7541c6123e6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.664291 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4d5j" event={"ID":"61cc98f1-a66d-488c-a076-914ada7e8de1","Type":"ContainerStarted","Data":"d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f"} Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.679845 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-6hlhp" podUID="fbf504d7-8829-43eb-983a-e7be0f5929ac" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.935727 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-sws5x" podUID="6bb8d69e-cfd3-4d0f-9c93-53716539e927" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.955728 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.957455 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:32 crc kubenswrapper[4520]: I0130 07:37:32.976655 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-9sjbr" podUID="255320d6-1503-4351-ad06-7794cbbdd120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.088927 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.088988 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.145691 4520 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b9tbv container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.147333 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" podUID="8a370c00-eeac-4281-8793-33a8c2d4b9e2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.145976 4520 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b9tbv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.148001 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b9tbv" podUID="8a370c00-eeac-4281-8793-33a8c2d4b9e2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.629799 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.633024 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.642983 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.655723 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.673653 4520 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-dqjws container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.673694 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" podUID="22d49062-540d-414e-b0c6-2c20d411fa71" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.956758 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.956774 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:33 crc kubenswrapper[4520]: I0130 07:37:33.960187 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.192701 4520 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qln6b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.192761 4520 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qln6b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.192777 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" podUID="ba04cf12-8677-4024-9c2c-618dfc096d4d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.192821 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qln6b" podUID="ba04cf12-8677-4024-9c2c-618dfc096d4d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.287211 4520 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bjb69 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.287284 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjb69" podUID="5dfff538-11e7-4c6b-9db0-c26e2f6b6140" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.546788 4520 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nc9qp container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.546852 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" podUID="82561e0e-8f14-4e88-adbb-b0a2b3d8760c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.587686 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" podUID="cdf5fc79-647e-4d70-8785-682d7f27ce10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.629644 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.629669 4520 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-jfrp7" podUID="cdf5fc79-647e-4d70-8785-682d7f27ce10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.629707 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.629883 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.629982 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.712768 4520 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nc9qp container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.712823 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.712843 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nc9qp" podUID="82561e0e-8f14-4e88-adbb-b0a2b3d8760c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.712870 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.795758 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.795793 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.795835 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.795876 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.795891 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.795959 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.796877 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.796963 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.799891 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"9a92ed05126d510ec5a530553d10693808717b1cc3b29e7303d9aa7976089b5b"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 30 07:37:34 crc kubenswrapper[4520]: I0130 07:37:34.799956 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" containerID="cri-o://9a92ed05126d510ec5a530553d10693808717b1cc3b29e7303d9aa7976089b5b" gracePeriod=30 Jan 30 07:37:35 crc kubenswrapper[4520]: I0130 07:37:35.443326 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 07:37:35 crc kubenswrapper[4520]: I0130 07:37:35.458045 
4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4d5j" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:35 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:35 crc kubenswrapper[4520]: > Jan 30 07:37:35 crc kubenswrapper[4520]: I0130 07:37:35.639118 4520 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:35 crc kubenswrapper[4520]: I0130 07:37:35.639182 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.610791 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.614653 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.790414 4520 patch_prober.go:28] interesting pod/route-controller-manager-864b9b6b9d-wjphz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.790424 4520 patch_prober.go:28] interesting pod/route-controller-manager-864b9b6b9d-wjphz container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.790614 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.790499 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.795237 4520 patch_prober.go:28] interesting pod/controller-manager-7f8cd9cf7d-bdgpj container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.795266 4520 patch_prober.go:28] interesting pod/controller-manager-7f8cd9cf7d-bdgpj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.795450 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" podUID="7096caef-a90c-4c67-bb72-972e1415d8c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:36 crc kubenswrapper[4520]: I0130 07:37:36.795329 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7f8cd9cf7d-bdgpj" podUID="7096caef-a90c-4c67-bb72-972e1415d8c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:37 crc kubenswrapper[4520]: I0130 07:37:37.967180 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="2790b738-6242-4208-a94f-be166868cc43" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.211:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:37 crc kubenswrapper[4520]: I0130 07:37:37.967190 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="2790b738-6242-4208-a94f-be166868cc43" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.211:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:38 crc kubenswrapper[4520]: I0130 07:37:38.953712 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" podUID="3099544c-3b89-415c-aea6-f56b7581a803" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:38 crc kubenswrapper[4520]: I0130 07:37:38.953727 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7l8nzq" podUID="3099544c-3b89-415c-aea6-f56b7581a803" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 30 07:37:38 crc kubenswrapper[4520]: I0130 07:37:38.960076 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.520717 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" podUID="c2f02050-fdee-42d1-87c0-74104b2aa6bc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.520948 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" podUID="c2f02050-fdee-42d1-87c0-74104b2aa6bc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.523481 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.606625 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.608040 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.771346 4520 patch_prober.go:28] interesting pod/oauth-openshift-6686467b65-4qb7w container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.771350 4520 patch_prober.go:28] interesting pod/oauth-openshift-6686467b65-4qb7w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.771666 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.771721 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.771715 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.771860 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 07:37:39 crc kubenswrapper[4520]: I0130 07:37:39.776088 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"c2ee130e090a1059087b7cef46dc305bc7d33cf7086869d981652ca11e954d88"} pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.258751 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" podUID="c8e83470-7d61-4906-9351-b93815bd1c72" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.260221 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.261926 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"987a6e18038f21370330633d1417c93816846a0ba1c3d1a8cbef615ab23848fc"} pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" containerMessage="Container webhook-server failed liveness probe, will be restarted" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.261978 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" podUID="c8e83470-7d61-4906-9351-b93815bd1c72" containerName="webhook-server" containerID="cri-o://987a6e18038f21370330633d1417c93816846a0ba1c3d1a8cbef615ab23848fc" gracePeriod=2 Jan 30 07:37:40 crc kubenswrapper[4520]: E0130 07:37:40.271876 4520 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a3be9f1_bd40_4667_bdd7_2cf23292fab5.slice/crio-9a92ed05126d510ec5a530553d10693808717b1cc3b29e7303d9aa7976089b5b.scope\": RecentStats: unable to find data in memory cache]" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.299670 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" podUID="c8e83470-7d61-4906-9351-b93815bd1c72" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.299781 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.299825 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.772861 4520 patch_prober.go:28] interesting pod/oauth-openshift-6686467b65-4qb7w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.773151 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.811565 4520 generic.go:334] "Generic (PLEG): container finished" podID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerID="9a92ed05126d510ec5a530553d10693808717b1cc3b29e7303d9aa7976089b5b" exitCode=0 Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.811615 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" event={"ID":"4a3be9f1-bd40-4667-bdd7-2cf23292fab5","Type":"ContainerDied","Data":"9a92ed05126d510ec5a530553d10693808717b1cc3b29e7303d9aa7976089b5b"} Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.944756 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-9n5hv" podUID="b338bd18-f666-4648-9d7f-325d75b9592a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.944867 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.945234 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-9n5hv" podUID="b338bd18-f666-4648-9d7f-325d75b9592a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.946567 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.946823 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"70dae541a6d8c35f47acf939f10a85da287ff03acc5b8b13552b38765f71cc5d"} pod="metallb-system/controller-6968d8fdc4-9n5hv" containerMessage="Container controller failed liveness probe, will be restarted" Jan 30 07:37:40 crc kubenswrapper[4520]: I0130 07:37:40.947709 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-6968d8fdc4-9n5hv" podUID="b338bd18-f666-4648-9d7f-325d75b9592a" containerName="controller" 
containerID="cri-o://70dae541a6d8c35f47acf939f10a85da287ff03acc5b8b13552b38765f71cc5d" gracePeriod=2 Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.306841 4520 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.307312 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.307392 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.308844 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.308946 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb" gracePeriod=30 Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.356326 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-xdqs6" podUID="7d581456-1ad4-4ae7-90c6-00b61382b16a" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:41 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:41 crc kubenswrapper[4520]: > Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.357717 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xdqs6" podUID="7d581456-1ad4-4ae7-90c6-00b61382b16a" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:41 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:41 crc kubenswrapper[4520]: > Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.472745 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.472840 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-ld6tp" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.474096 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" 
containerStatusID={"Type":"cri-o","ID":"25db906ed138ad8b810f14764be335cf5189012dd21914199e259c7e791b13d8"} pod="metallb-system/frr-k8s-ld6tp" containerMessage="Container frr failed liveness probe, will be restarted" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.474203 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="frr" containerID="cri-o://25db906ed138ad8b810f14764be335cf5189012dd21914199e259c7e791b13d8" gracePeriod=2 Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.513667 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podUID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.513789 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.556205 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.556231 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podUID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.557249 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-ld6tp" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.557303 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.556283 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.557392 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ld6tp" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.688205 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:37:41 crc kubenswrapper[4520]: E0130 07:37:41.689183 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:37:41 crc 
kubenswrapper[4520]: I0130 07:37:41.839544 4520 generic.go:334] "Generic (PLEG): container finished" podID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerID="25db906ed138ad8b810f14764be335cf5189012dd21914199e259c7e791b13d8" exitCode=143 Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.839645 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerDied","Data":"25db906ed138ad8b810f14764be335cf5189012dd21914199e259c7e791b13d8"} Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.859094 4520 generic.go:334] "Generic (PLEG): container finished" podID="b338bd18-f666-4648-9d7f-325d75b9592a" containerID="70dae541a6d8c35f47acf939f10a85da287ff03acc5b8b13552b38765f71cc5d" exitCode=0 Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.859165 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-9n5hv" event={"ID":"b338bd18-f666-4648-9d7f-325d75b9592a","Type":"ContainerDied","Data":"70dae541a6d8c35f47acf939f10a85da287ff03acc5b8b13552b38765f71cc5d"} Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.861606 4520 generic.go:334] "Generic (PLEG): container finished" podID="c8e83470-7d61-4906-9351-b93815bd1c72" containerID="987a6e18038f21370330633d1417c93816846a0ba1c3d1a8cbef615ab23848fc" exitCode=0 Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.861723 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" event={"ID":"c8e83470-7d61-4906-9351-b93815bd1c72","Type":"ContainerDied","Data":"987a6e18038f21370330633d1417c93816846a0ba1c3d1a8cbef615ab23848fc"} Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.864255 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" event={"ID":"4a3be9f1-bd40-4667-bdd7-2cf23292fab5","Type":"ContainerStarted","Data":"f79b7a36d81f132067efb0b4da6af02d2330f0accd9ee1bb4fb4a776980fe60c"} Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.865003 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.865066 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.865464 4520 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" containerID="cri-o://9a92ed05126d510ec5a530553d10693808717b1cc3b29e7303d9aa7976089b5b" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.865488 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.865604 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" 
containerStatusID={"Type":"cri-o","ID":"9ccdc46d3acff15965d0eafde80630de8e9eaaff5bd00761e0708eea49b1e902"} pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.865681 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podUID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerName="frr-k8s-webhook-server" containerID="cri-o://9ccdc46d3acff15965d0eafde80630de8e9eaaff5bd00761e0708eea49b1e902" gracePeriod=10 Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.945502 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fkc22" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:41 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:41 crc kubenswrapper[4520]: > Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.946183 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4k8cc" podUID="bcf48478-d19a-4c05-999a-d0c96c6ddbec" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:41 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:41 crc kubenswrapper[4520]: > Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.954570 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.954604 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.954676 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.954698 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 07:37:41 crc kubenswrapper[4520]: I0130 07:37:41.955761 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"1080b106b85d3627a546f04d75c8a802b64c642eb66374fd2b6d3ff864941023"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.070217 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.568368 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4d5j" podStartSLOduration=14.614084673 podStartE2EDuration="40.567309307s" podCreationTimestamp="2026-01-30 07:37:02 +0000 UTC" firstStartedPulling="2026-01-30 07:37:05.331814527 +0000 UTC m=+3138.960166708" lastFinishedPulling="2026-01-30 07:37:31.285039161 +0000 UTC m=+3164.913391342" observedRunningTime="2026-01-30 07:37:42.553729914 +0000 UTC m=+3176.182082115" watchObservedRunningTime="2026-01-30 07:37:42.567309307 +0000 UTC 
m=+3176.195661488" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.606109 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.878651 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" event={"ID":"c8e83470-7d61-4906-9351-b93815bd1c72","Type":"ContainerStarted","Data":"517c8f431463be352638b3aeb41842a13e63027fdaef23769ba8c9c35f81d4c0"} Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.878956 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.880724 4520 generic.go:334] "Generic (PLEG): container finished" podID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerID="9ccdc46d3acff15965d0eafde80630de8e9eaaff5bd00761e0708eea49b1e902" exitCode=0 Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.880807 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" event={"ID":"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e","Type":"ContainerDied","Data":"9ccdc46d3acff15965d0eafde80630de8e9eaaff5bd00761e0708eea49b1e902"} Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.886184 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"e4f0040c8846c164b2591c0388d3bb9094c99c7d0bf69b9367241e46d6808f81"} Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.887186 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"7f4f7eac5e3b47ce2d2e52163c44d18deddb3460b4d48161275af5149bbef8c1"} pod="metallb-system/frr-k8s-ld6tp" containerMessage="Container controller failed liveness probe, will be restarted" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.887311 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-ld6tp" podUID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerName="controller" containerID="cri-o://7f4f7eac5e3b47ce2d2e52163c44d18deddb3460b4d48161275af5149bbef8c1" gracePeriod=2 Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.889328 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ld6tp" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.890921 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-9n5hv" event={"ID":"b338bd18-f666-4648-9d7f-325d75b9592a","Type":"ContainerStarted","Data":"7d937a08cdb0ddf3d428f390f8d5867e5dd2147eb14ddbca0336b96d0c305687"} Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.891075 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.955626 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.989372 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" 
podUID="c2d1bf96-9105-4d5d-8dcd-174c098c76d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:42 crc kubenswrapper[4520]: I0130 07:37:42.989050 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l28fb" podUID="c2d1bf96-9105-4d5d-8dcd-174c098c76d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.040285 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" podUID="80a81bc2-ebfd-4fa9-80ed-ddb70fb32677" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.040367 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rm676" podUID="80a81bc2-ebfd-4fa9-80ed-ddb70fb32677" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.490655 4520 patch_prober.go:28] interesting pod/console-5698ddd759-pv9lh container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.491086 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5698ddd759-pv9lh" podUID="1117f9de-e43c-4012-8d6d-1d975e62a4cb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.752701 4520 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-dqjws container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.752945 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" podUID="22d49062-540d-414e-b0c6-2c20d411fa71" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753001 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753483 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753567 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753625 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753746 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753788 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753797 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"ba69a6990fa6e19ab27b958a5d3beb06a49879a3abc4ad5364b14731faa4ac91"} pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753830 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" podUID="22d49062-540d-414e-b0c6-2c20d411fa71" containerName="authentication-operator" containerID="cri-o://ba69a6990fa6e19ab27b958a5d3beb06a49879a3abc4ad5364b14731faa4ac91" gracePeriod=30 Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.753850 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.754460 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"730c7d86939b8b22ada65f588ab575155a69e61ec1dcaabe2668edc0c804436a"} pod="openshift-console-operator/console-operator-58897d9998-w7xl2" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.754499 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" containerID="cri-o://730c7d86939b8b22ada65f588ab575155a69e61ec1dcaabe2668edc0c804436a" gracePeriod=30 Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.904823 4520 generic.go:334] "Generic (PLEG): container finished" 
podID="440b0b7d-713b-4590-ad35-05fa9d42423a" containerID="7f4f7eac5e3b47ce2d2e52163c44d18deddb3460b4d48161275af5149bbef8c1" exitCode=0 Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.904890 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerDied","Data":"7f4f7eac5e3b47ce2d2e52163c44d18deddb3460b4d48161275af5149bbef8c1"} Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.909588 4520 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb" exitCode=0 Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.909724 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"20f365e319337b1d1c71d80b5631c2264c907a4b8c06d78c1e1c2ed64915fdfb"} Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.957224 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.957319 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.957925 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.957980 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 07:37:43 crc kubenswrapper[4520]: I0130 07:37:43.958719 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.526128 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": context deadline exceeded" start-of-body= Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.526420 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": context deadline exceeded" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.526776 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.526991 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.527012 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.527035 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.527905 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"2a8dc7f17ef0190cbdf74fc24740afad70d3fa6a4f2eaaa8158a9e5aa4797021"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.527942 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" containerID="cri-o://2a8dc7f17ef0190cbdf74fc24740afad70d3fa6a4f2eaaa8158a9e5aa4797021" gracePeriod=30 Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.612661 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.612695 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.612718 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.612759 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.612814 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.612840 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.613984 4520 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="router" containerStatusID={"Type":"cri-o","ID":"d290a10da9fd5e01b8337c522c5e6d92740e66c7432a53ceab083329faa1bf64"} pod="openshift-ingress/router-default-5444994796-z67kf" containerMessage="Container router failed liveness probe, will be restarted" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.614028 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" containerID="cri-o://d290a10da9fd5e01b8337c522c5e6d92740e66c7432a53ceab083329faa1bf64" gracePeriod=10 Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.638535 4520 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.638590 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.638669 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.860729 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4d5j" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:44 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:44 crc kubenswrapper[4520]: > Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.920323 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"85b53a52e4281e1f5b445160733ace0003cc72b755a96c37a6ad0b2eaee1a32b"} Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.920671 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.923218 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" event={"ID":"6ab13d5a-1ba0-4181-ae7b-69ed90c1793e","Type":"ContainerStarted","Data":"720511efc556d124f8e51c386177ee2948976956e1b7c987ed6868c0fc0ab26b"} Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.923746 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" podUID="6ab13d5a-1ba0-4181-ae7b-69ed90c1793e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": dial tcp 10.217.0.47:7572: connect: connection refused" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.928260 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ld6tp" event={"ID":"440b0b7d-713b-4590-ad35-05fa9d42423a","Type":"ContainerStarted","Data":"2bc599b13118556d5a396de9d57d6edee43dfda9ceda409810a713bc3ec6fc2c"} Jan 30 07:37:44 crc 
kubenswrapper[4520]: I0130 07:37:44.928395 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ld6tp" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.932706 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-w7xl2_23b08d0a-4aa5-43be-a498-55e54d6e8c31/console-operator/0.log" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.932966 4520 generic.go:334] "Generic (PLEG): container finished" podID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerID="730c7d86939b8b22ada65f588ab575155a69e61ec1dcaabe2668edc0c804436a" exitCode=1 Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.932992 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" event={"ID":"23b08d0a-4aa5-43be-a498-55e54d6e8c31","Type":"ContainerDied","Data":"730c7d86939b8b22ada65f588ab575155a69e61ec1dcaabe2668edc0c804436a"} Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.955554 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera" probeResult="failure" output="command timed out" Jan 30 07:37:44 crc kubenswrapper[4520]: I0130 07:37:44.959710 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.040208 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 30 07:37:45 crc kubenswrapper[4520]: [+]has-synced ok Jan 30 07:37:45 crc kubenswrapper[4520]: [-]process-running failed: reason withheld Jan 30 07:37:45 crc kubenswrapper[4520]: healthz check failed Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.040295 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.241727 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.349619 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ld6tp" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.610404 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.610459 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:45 crc 
kubenswrapper[4520]: I0130 07:37:45.610411 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.610690 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.665798 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ld6tp" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.943176 4520 generic.go:334] "Generic (PLEG): container finished" podID="22d49062-540d-414e-b0c6-2c20d411fa71" containerID="ba69a6990fa6e19ab27b958a5d3beb06a49879a3abc4ad5364b14731faa4ac91" exitCode=0 Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.944579 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" event={"ID":"22d49062-540d-414e-b0c6-2c20d411fa71","Type":"ContainerDied","Data":"ba69a6990fa6e19ab27b958a5d3beb06a49879a3abc4ad5364b14731faa4ac91"} Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.947638 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-4r4pj" podUID="928273b1-c655-46cb-860d-584378c92f40" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.72:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.948807 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-w7xl2_23b08d0a-4aa5-43be-a498-55e54d6e8c31/console-operator/0.log" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.948977 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" event={"ID":"23b08d0a-4aa5-43be-a498-55e54d6e8c31","Type":"ContainerStarted","Data":"f5f7a7258116d048fc26cf52c2e235ae600c64ee803c81640ee33f2c31b206c3"} Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.949709 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.949765 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.950168 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" Jan 30 07:37:45 crc kubenswrapper[4520]: I0130 07:37:45.950378 4520 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.826639 4520 patch_prober.go:28] interesting pod/route-controller-manager-864b9b6b9d-wjphz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.826989 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.867649 4520 patch_prober.go:28] interesting pod/route-controller-manager-864b9b6b9d-wjphz container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.867712 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.867763 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.868500 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"5b235c69892db9cf627451db8a66076d5b83d0a68e734d8cb086f5cebc831a1b"} pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.868561 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerName="route-controller-manager" containerID="cri-o://5b235c69892db9cf627451db8a66076d5b83d0a68e734d8cb086f5cebc831a1b" gracePeriod=30 Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.958289 4520 generic.go:334] "Generic (PLEG): container finished" podID="86dea262-c989-43a8-ae6e-e744012a5e07" containerID="2a8dc7f17ef0190cbdf74fc24740afad70d3fa6a4f2eaaa8158a9e5aa4797021" exitCode=0 Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.958345 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" event={"ID":"86dea262-c989-43a8-ae6e-e744012a5e07","Type":"ContainerDied","Data":"2a8dc7f17ef0190cbdf74fc24740afad70d3fa6a4f2eaaa8158a9e5aa4797021"} Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.958371 4520 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" event={"ID":"86dea262-c989-43a8-ae6e-e744012a5e07","Type":"ContainerStarted","Data":"b8e6ce60d879d1c80cfd9354d992c8dfcc24cfb24eccede8144aa8753c75b236"} Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.959320 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.959381 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.959405 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.962032 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dqjws" event={"ID":"22d49062-540d-414e-b0c6-2c20d411fa71","Type":"ContainerStarted","Data":"7be8aecc1a90e9b82ff997f650ff199ab3fa40bd7b308867fef3fe0ebcea67c5"} Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.962352 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 30 07:37:46 crc kubenswrapper[4520]: I0130 07:37:46.962385 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 30 07:37:47 crc kubenswrapper[4520]: I0130 07:37:47.976972 4520 generic.go:334] "Generic (PLEG): container finished" podID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerID="5b235c69892db9cf627451db8a66076d5b83d0a68e734d8cb086f5cebc831a1b" exitCode=0 Jan 30 07:37:47 crc kubenswrapper[4520]: I0130 07:37:47.977079 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" event={"ID":"29f74dba-e0dc-4507-9bb9-97664a2839c9","Type":"ContainerDied","Data":"5b235c69892db9cf627451db8a66076d5b83d0a68e734d8cb086f5cebc831a1b"} Jan 30 07:37:47 crc kubenswrapper[4520]: I0130 07:37:47.978646 4520 patch_prober.go:28] interesting pod/console-operator-58897d9998-w7xl2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 30 07:37:47 crc kubenswrapper[4520]: I0130 07:37:47.978689 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w7xl2" podUID="23b08d0a-4aa5-43be-a498-55e54d6e8c31" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 30 07:37:47 crc kubenswrapper[4520]: I0130 07:37:47.978779 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Jan 30 07:37:47 crc kubenswrapper[4520]: I0130 07:37:47.978856 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.354927 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera" containerID="cri-o://ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b" gracePeriod=26 Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.356279 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" containerID="cri-o://1080b106b85d3627a546f04d75c8a802b64c642eb66374fd2b6d3ff864941023" gracePeriod=24 Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.606763 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.606821 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.606822 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.606850 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.986679 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" event={"ID":"29f74dba-e0dc-4507-9bb9-97664a2839c9","Type":"ContainerStarted","Data":"d8024b7620a79580b56fc2e79c7f1cec19f2456401cd6e45d2e89eecba0788b0"} Jan 30 07:37:48 crc 
kubenswrapper[4520]: I0130 07:37:48.986913 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.987753 4520 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kcrth container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.987782 4520 patch_prober.go:28] interesting pod/route-controller-manager-864b9b6b9d-wjphz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.987806 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" podUID="86dea262-c989-43a8-ae6e-e744012a5e07" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Jan 30 07:37:48 crc kubenswrapper[4520]: I0130 07:37:48.987864 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Jan 30 07:37:49 crc kubenswrapper[4520]: I0130 07:37:49.439127 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-cxn6m" podUID="c2f02050-fdee-42d1-87c0-74104b2aa6bc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:49 crc kubenswrapper[4520]: I0130 07:37:49.469646 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0ab985e5-7e52-4438-9d9b-fd6f2e4f4175" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 07:37:49 crc kubenswrapper[4520]: I0130 07:37:49.812059 4520 patch_prober.go:28] interesting pod/oauth-openshift-6686467b65-4qb7w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 07:37:49 crc kubenswrapper[4520]: I0130 07:37:49.812111 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.61:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 07:37:49 crc kubenswrapper[4520]: I0130 07:37:49.993424 4520 patch_prober.go:28] interesting pod/route-controller-manager-864b9b6b9d-wjphz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Jan 30 07:37:49 crc kubenswrapper[4520]: I0130 07:37:49.994277 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" podUID="29f74dba-e0dc-4507-9bb9-97664a2839c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Jan 30 07:37:50 crc kubenswrapper[4520]: I0130 07:37:50.963058 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 07:37:50 crc kubenswrapper[4520]: I0130 07:37:50.963567 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 30 07:37:50 crc kubenswrapper[4520]: I0130 07:37:50.966782 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"0739919db0e42ab2d21e594a295adc079dbd11ac4f42597ed8b5b399d87d6ee4"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 30 07:37:50 crc kubenswrapper[4520]: I0130 07:37:50.966910 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" containerID="cri-o://0739919db0e42ab2d21e594a295adc079dbd11ac4f42597ed8b5b399d87d6ee4" gracePeriod=30 Jan 30 07:37:51 crc kubenswrapper[4520]: E0130 07:37:51.122545 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1080b106b85d3627a546f04d75c8a802b64c642eb66374fd2b6d3ff864941023" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 07:37:51 crc kubenswrapper[4520]: E0130 07:37:51.124130 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1080b106b85d3627a546f04d75c8a802b64c642eb66374fd2b6d3ff864941023" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 07:37:51 crc kubenswrapper[4520]: E0130 07:37:51.125386 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1080b106b85d3627a546f04d75c8a802b64c642eb66374fd2b6d3ff864941023" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 07:37:51 crc kubenswrapper[4520]: E0130 07:37:51.125499 4520 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" containerName="galera" Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.452108 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0ab985e5-7e52-4438-9d9b-fd6f2e4f4175" 
containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.458603 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fkc22" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:51 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:51 crc kubenswrapper[4520]: > Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.527480 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4k8cc" podUID="bcf48478-d19a-4c05-999a-d0c96c6ddbec" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:51 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:51 crc kubenswrapper[4520]: > Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.606143 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.606189 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.606361 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.606404 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.606446 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.606989 4520 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rn9s4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.607038 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.607744 
4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"f79b7a36d81f132067efb0b4da6af02d2330f0accd9ee1bb4fb4a776980fe60c"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Jan 30 07:37:51 crc kubenswrapper[4520]: I0130 07:37:51.607836 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" podUID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerName="openshift-config-operator" containerID="cri-o://f79b7a36d81f132067efb0b4da6af02d2330f0accd9ee1bb4fb4a776980fe60c" gracePeriod=30
Jan 30 07:37:52 crc kubenswrapper[4520]: E0130 07:37:52.667781 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b is running failed: container process not found" containerID="ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 30 07:37:52 crc kubenswrapper[4520]: E0130 07:37:52.668570 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b is running failed: container process not found" containerID="ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 30 07:37:52 crc kubenswrapper[4520]: E0130 07:37:52.668877 4520 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b is running failed: container process not found" containerID="ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 30 07:37:52 crc kubenswrapper[4520]: E0130 07:37:52.668912 4520 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerName="galera"
Jan 30 07:37:52 crc kubenswrapper[4520]: I0130 07:37:52.850250 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-w7xl2"
Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.028842 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa","Type":"ContainerDied","Data":"ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b"}
Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.029059 4520 generic.go:334] "Generic (PLEG): container finished" podID="4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa" containerID="ccc9ebeeb596cde108546303c05cee534a9ca8c66903c4e71b64ac83a3aaaf4b" exitCode=0
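The repeated ExecSync errors here (and the "container is stopping" ones at 07:37:51 above) are not a new failure: the galera containers were deliberately killed at 07:37:48 with gracePeriod=26 and 24, so the readiness exec probe (/var/lib/operator-scripts/mysql_probe.sh) races the shutdown; first CRI-O can no longer register an exec PID, then, once the process is gone, the probe gets NotFound. When scanning an excerpt like this, bucketing the probe outputs helps triage; a small throwaway classifier (the categories are editorial shorthand, not kubelet terminology):

```go
package main

import (
	"fmt"
	"strings"
)

// classify buckets the probe failure outputs that occur in this excerpt.
func classify(output string) string {
	switch {
	case strings.Contains(output, "connection refused"):
		return "endpoint down: process restarting or not yet listening"
	case strings.Contains(output, "Client.Timeout exceeded"):
		return "endpoint hung: connected but no reply before the probe timeout"
	case strings.Contains(output, "command timed out"):
		return "exec probe overran its timeout"
	case strings.Contains(output, "container is stopping"),
		strings.Contains(output, "container process not found"):
		return "probe raced container shutdown: expected during a restart"
	case strings.Contains(output, "statuscode: 500"):
		return "endpoint up but reporting unhealthy"
	default:
		return "unclassified"
	}
}

func main() {
	fmt.Println(classify("cannot register an exec PID: container is stopping"))
}
```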
finished" podID="9df01147-3505-4e88-b91c-671e2149ab19" containerID="0739919db0e42ab2d21e594a295adc079dbd11ac4f42597ed8b5b399d87d6ee4" exitCode=0 Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.031169 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerDied","Data":"0739919db0e42ab2d21e594a295adc079dbd11ac4f42597ed8b5b399d87d6ee4"} Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.032923 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-rn9s4_4a3be9f1-bd40-4667-bdd7-2cf23292fab5/openshift-config-operator/1.log" Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.035150 4520 generic.go:334] "Generic (PLEG): container finished" podID="4a3be9f1-bd40-4667-bdd7-2cf23292fab5" containerID="f79b7a36d81f132067efb0b4da6af02d2330f0accd9ee1bb4fb4a776980fe60c" exitCode=255 Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.035179 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" event={"ID":"4a3be9f1-bd40-4667-bdd7-2cf23292fab5","Type":"ContainerDied","Data":"f79b7a36d81f132067efb0b4da6af02d2330f0accd9ee1bb4fb4a776980fe60c"} Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.035251 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" event={"ID":"4a3be9f1-bd40-4667-bdd7-2cf23292fab5","Type":"ContainerStarted","Data":"d88869a7a1f4d7d4ad8cf39cb3f3502cad04deb6f1fca70fffb4db4a0f7705d9"} Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.035551 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.036886 4520 scope.go:117] "RemoveContainer" containerID="9a92ed05126d510ec5a530553d10693808717b1cc3b29e7303d9aa7976089b5b" Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.530655 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kcrth" Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.768104 4520 patch_prober.go:28] interesting pod/router-default-5444994796-z67kf container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 30 07:37:53 crc kubenswrapper[4520]: [+]has-synced ok Jan 30 07:37:53 crc kubenswrapper[4520]: [-]process-running failed: reason withheld Jan 30 07:37:53 crc kubenswrapper[4520]: healthz check failed Jan 30 07:37:53 crc kubenswrapper[4520]: I0130 07:37:53.768154 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-z67kf" podUID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.072398 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-rn9s4_4a3be9f1-bd40-4667-bdd7-2cf23292fab5/openshift-config-operator/1.log" Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.110445 4520 generic.go:334] "Generic (PLEG): container finished" podID="0f6edd3b-e0fe-4d2b-9e68-912425c0128e" 
containerID="1080b106b85d3627a546f04d75c8a802b64c642eb66374fd2b6d3ff864941023" exitCode=0 Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.110585 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f6edd3b-e0fe-4d2b-9e68-912425c0128e","Type":"ContainerDied","Data":"1080b106b85d3627a546f04d75c8a802b64c642eb66374fd2b6d3ff864941023"} Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.116331 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4f4bfe6a-89ec-4e2d-8961-6c9c3a9c64fa","Type":"ContainerStarted","Data":"a0b6d4de0112b6cefab1ca708ca01bc94b0835453e6d08f9af4a295f254953b2"} Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.140427 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerStarted","Data":"c797e54de8f94f7563f521d0d23ccf8a9d1bb6a59c76866e0cb38ddd804574be"} Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.243559 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4d5j" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" probeResult="failure" output=< Jan 30 07:37:54 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:37:54 crc kubenswrapper[4520]: > Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.426463 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0ab985e5-7e52-4438-9d9b-fd6f2e4f4175" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.426559 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.427407 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"c4663a9d667d51593ed37c045a3bcba0b11fb3e8d15f7d9d413e2be4a5d9f1e2"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.427463 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0ab985e5-7e52-4438-9d9b-fd6f2e4f4175" containerName="cinder-scheduler" containerID="cri-o://c4663a9d667d51593ed37c045a3bcba0b11fb3e8d15f7d9d413e2be4a5d9f1e2" gracePeriod=30 Jan 30 07:37:54 crc kubenswrapper[4520]: E0130 07:37:54.660834 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:40700->192.168.25.87:39417: write tcp 192.168.25.87:40700->192.168.25.87:39417: write: connection reset by peer Jan 30 07:37:54 crc kubenswrapper[4520]: I0130 07:37:54.965024 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:37:55 crc kubenswrapper[4520]: I0130 07:37:55.153335 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-z67kf_a7229bd1-5891-4654-ad14-c0efed77e9b7/router/0.log" Jan 30 07:37:55 crc kubenswrapper[4520]: I0130 07:37:55.153376 4520 generic.go:334] "Generic (PLEG): container finished" podID="a7229bd1-5891-4654-ad14-c0efed77e9b7" containerID="d290a10da9fd5e01b8337c522c5e6d92740e66c7432a53ceab083329faa1bf64" exitCode=137 Jan 30 
07:37:55 crc kubenswrapper[4520]: I0130 07:37:55.153441 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z67kf" event={"ID":"a7229bd1-5891-4654-ad14-c0efed77e9b7","Type":"ContainerDied","Data":"d290a10da9fd5e01b8337c522c5e6d92740e66c7432a53ceab083329faa1bf64"} Jan 30 07:37:55 crc kubenswrapper[4520]: I0130 07:37:55.161939 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0f6edd3b-e0fe-4d2b-9e68-912425c0128e","Type":"ContainerStarted","Data":"298bf62adc03d6d09d7e61826367c6efd89ad6162b3206f7f5c4da959e0c15f6"} Jan 30 07:37:55 crc kubenswrapper[4520]: I0130 07:37:55.685767 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:37:55 crc kubenswrapper[4520]: E0130 07:37:55.686613 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:37:55 crc kubenswrapper[4520]: I0130 07:37:55.799158 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-864b9b6b9d-wjphz" Jan 30 07:37:56 crc kubenswrapper[4520]: I0130 07:37:56.172131 4520 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-z67kf_a7229bd1-5891-4654-ad14-c0efed77e9b7/router/0.log" Jan 30 07:37:56 crc kubenswrapper[4520]: I0130 07:37:56.173246 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z67kf" event={"ID":"a7229bd1-5891-4654-ad14-c0efed77e9b7","Type":"ContainerStarted","Data":"321e7cec0849b833c0733427b768b0d956e105d777c88fa630982c3009b12fde"} Jan 30 07:37:56 crc kubenswrapper[4520]: I0130 07:37:56.173453 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-notification-agent" containerID="cri-o://5f65b0709cbc49f21ab500e35c601379fbeed5bf2d95a64736a3a046c3ffaf9c" gracePeriod=30 Jan 30 07:37:56 crc kubenswrapper[4520]: I0130 07:37:56.173509 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" containerID="cri-o://c797e54de8f94f7563f521d0d23ccf8a9d1bb6a59c76866e0cb38ddd804574be" gracePeriod=30 Jan 30 07:37:56 crc kubenswrapper[4520]: I0130 07:37:56.173568 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="sg-core" containerID="cri-o://6a0eab6d2a46fa88f690d128a4a5ad7fe06e2be80d9292edbe570783e8d3a999" gracePeriod=30 Jan 30 07:37:56 crc kubenswrapper[4520]: I0130 07:37:56.173510 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="proxy-httpd" containerID="cri-o://dd73d25d370ca14503c1034bad7c9cd70882e221992943d2f672c1265130f65f" gracePeriod=30 Jan 30 07:37:56 crc kubenswrapper[4520]: I0130 07:37:56.531126 4520 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 07:37:56 crc kubenswrapper[4520]: I0130 07:37:56.537568 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 07:37:57 crc kubenswrapper[4520]: I0130 07:37:57.182274 4520 generic.go:334] "Generic (PLEG): container finished" podID="9df01147-3505-4e88-b91c-671e2149ab19" containerID="dd73d25d370ca14503c1034bad7c9cd70882e221992943d2f672c1265130f65f" exitCode=0 Jan 30 07:37:57 crc kubenswrapper[4520]: I0130 07:37:57.182808 4520 generic.go:334] "Generic (PLEG): container finished" podID="9df01147-3505-4e88-b91c-671e2149ab19" containerID="6a0eab6d2a46fa88f690d128a4a5ad7fe06e2be80d9292edbe570783e8d3a999" exitCode=2 Jan 30 07:37:57 crc kubenswrapper[4520]: I0130 07:37:57.182356 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerDied","Data":"dd73d25d370ca14503c1034bad7c9cd70882e221992943d2f672c1265130f65f"} Jan 30 07:37:57 crc kubenswrapper[4520]: I0130 07:37:57.182916 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerDied","Data":"6a0eab6d2a46fa88f690d128a4a5ad7fe06e2be80d9292edbe570783e8d3a999"} Jan 30 07:37:57 crc kubenswrapper[4520]: I0130 07:37:57.183081 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 07:37:57 crc kubenswrapper[4520]: I0130 07:37:57.186680 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-z67kf" Jan 30 07:37:57 crc kubenswrapper[4520]: I0130 07:37:57.621626 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rn9s4" Jan 30 07:37:58 crc kubenswrapper[4520]: I0130 07:37:58.192376 4520 generic.go:334] "Generic (PLEG): container finished" podID="0ab985e5-7e52-4438-9d9b-fd6f2e4f4175" containerID="c4663a9d667d51593ed37c045a3bcba0b11fb3e8d15f7d9d413e2be4a5d9f1e2" exitCode=0 Jan 30 07:37:58 crc kubenswrapper[4520]: I0130 07:37:58.193385 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175","Type":"ContainerDied","Data":"c4663a9d667d51593ed37c045a3bcba0b11fb3e8d15f7d9d413e2be4a5d9f1e2"} Jan 30 07:37:58 crc kubenswrapper[4520]: I0130 07:37:58.423082 4520 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.220:3000/\": dial tcp 10.217.0.220:3000: connect: connection refused" Jan 30 07:37:58 crc kubenswrapper[4520]: I0130 07:37:58.787489 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 07:37:59 crc kubenswrapper[4520]: I0130 07:37:59.269436 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-76c96b8575-pxtsl" Jan 30 07:37:59 crc kubenswrapper[4520]: I0130 07:37:59.869303 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-9n5hv" Jan 30 07:38:00 crc kubenswrapper[4520]: I0130 
07:38:00.210155 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ab985e5-7e52-4438-9d9b-fd6f2e4f4175","Type":"ContainerStarted","Data":"38fbc250e717f8605799a6850f44a97c37eba1ace502ba26a992b377c6668c17"} Jan 30 07:38:00 crc kubenswrapper[4520]: I0130 07:38:00.440820 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ld6tp" Jan 30 07:38:00 crc kubenswrapper[4520]: I0130 07:38:00.604093 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kp8f6" Jan 30 07:38:00 crc kubenswrapper[4520]: I0130 07:38:00.766379 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fkc22" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" probeResult="failure" output=< Jan 30 07:38:00 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:38:00 crc kubenswrapper[4520]: > Jan 30 07:38:01 crc kubenswrapper[4520]: I0130 07:38:01.115902 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 07:38:01 crc kubenswrapper[4520]: I0130 07:38:01.115953 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 07:38:01 crc kubenswrapper[4520]: I0130 07:38:01.226106 4520 generic.go:334] "Generic (PLEG): container finished" podID="9df01147-3505-4e88-b91c-671e2149ab19" containerID="5f65b0709cbc49f21ab500e35c601379fbeed5bf2d95a64736a3a046c3ffaf9c" exitCode=0 Jan 30 07:38:01 crc kubenswrapper[4520]: I0130 07:38:01.226823 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerDied","Data":"5f65b0709cbc49f21ab500e35c601379fbeed5bf2d95a64736a3a046c3ffaf9c"} Jan 30 07:38:01 crc kubenswrapper[4520]: I0130 07:38:01.257749 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 07:38:01 crc kubenswrapper[4520]: I0130 07:38:01.383829 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 07:38:01 crc kubenswrapper[4520]: I0130 07:38:01.555959 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4k8cc" podUID="bcf48478-d19a-4c05-999a-d0c96c6ddbec" containerName="registry-server" probeResult="failure" output=< Jan 30 07:38:01 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:38:01 crc kubenswrapper[4520]: > Jan 30 07:38:02 crc kubenswrapper[4520]: I0130 07:38:02.411727 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 07:38:02 crc kubenswrapper[4520]: I0130 07:38:02.667193 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 07:38:02 crc kubenswrapper[4520]: I0130 07:38:02.667440 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 07:38:02 crc kubenswrapper[4520]: I0130 07:38:02.768783 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 07:38:03 crc kubenswrapper[4520]: I0130 07:38:03.243991 4520 generic.go:334] "Generic (PLEG): container finished" 
podID="8ee881d4-3f07-49b1-8444-b15c5b868b9e" containerID="de36bf21a16f7f9a539415d5c073f92fba78998bc011ffeefba3ff90c571029c" exitCode=1 Jan 30 07:38:03 crc kubenswrapper[4520]: I0130 07:38:03.244079 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"8ee881d4-3f07-49b1-8444-b15c5b868b9e","Type":"ContainerDied","Data":"de36bf21a16f7f9a539415d5c073f92fba78998bc011ffeefba3ff90c571029c"} Jan 30 07:38:03 crc kubenswrapper[4520]: I0130 07:38:03.339238 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 07:38:04 crc kubenswrapper[4520]: I0130 07:38:04.132543 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4d5j" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" probeResult="failure" output=< Jan 30 07:38:04 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:38:04 crc kubenswrapper[4520]: > Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.169345 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.258629 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 30 07:38:05 crc kubenswrapper[4520]: E0130 07:38:05.260187 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee881d4-3f07-49b1-8444-b15c5b868b9e" containerName="tempest-tests-tempest-tests-runner" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.260269 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee881d4-3f07-49b1-8444-b15c5b868b9e" containerName="tempest-tests-tempest-tests-runner" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.260471 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee881d4-3f07-49b1-8444-b15c5b868b9e" containerName="tempest-tests-tempest-tests-runner" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.262958 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.264750 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"8ee881d4-3f07-49b1-8444-b15c5b868b9e","Type":"ContainerDied","Data":"2b9ffcb34695814d706fac6ab9558ab60ebbfe392522811988d30b560ef171c9"} Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.264909 4520 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.264909 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.265298 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9ffcb34695814d706fac6ab9558ab60ebbfe392522811988d30b560ef171c9"
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.268431 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1"
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.268436 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1"
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.274471 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-config-data\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") "
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.274662 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") "
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.274767 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-temporary\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") "
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.274919 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config-secret\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") "
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.274937 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-workdir\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") "
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.275056 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ssh-key\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") "
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.275083 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ca-certs\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") "
Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.275107 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl5jw\" (UniqueName: \"kubernetes.io/projected/8ee881d4-3f07-49b1-8444-b15c5b868b9e-kube-api-access-jl5jw\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") "
Jan 30
07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.275136 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config\") pod \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\" (UID: \"8ee881d4-3f07-49b1-8444-b15c5b868b9e\") " Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.275742 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-config-data" (OuterVolumeSpecName: "config-data") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.276566 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.302887 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.312304 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.324747 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.325837 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee881d4-3f07-49b1-8444-b15c5b868b9e-kube-api-access-jl5jw" (OuterVolumeSpecName: "kube-api-access-jl5jw") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "kube-api-access-jl5jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.333943 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.342243 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.377951 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.384132 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8ee881d4-3f07-49b1-8444-b15c5b868b9e" (UID: "8ee881d4-3f07-49b1-8444-b15c5b868b9e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.391224 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.391774 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpl4b\" (UniqueName: \"kubernetes.io/projected/68266a47-8812-40f3-bd46-d1ee8d55def1-kube-api-access-xpl4b\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.392077 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.392198 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.392314 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 
30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.393137 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.393305 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.393458 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.393580 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.393764 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.393825 4520 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.393877 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl5jw\" (UniqueName: \"kubernetes.io/projected/8ee881d4-3f07-49b1-8444-b15c5b868b9e-kube-api-access-jl5jw\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.394111 4520 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.394239 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ee881d4-3f07-49b1-8444-b15c5b868b9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.394296 4520 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.394344 4520 reconciler_common.go:293] "Volume detached 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ee881d4-3f07-49b1-8444-b15c5b868b9e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.394491 4520 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ee881d4-3f07-49b1-8444-b15c5b868b9e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.426206 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.497869 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.497967 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.498019 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.498112 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.498162 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.498210 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.498260 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.498303 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpl4b\" (UniqueName: \"kubernetes.io/projected/68266a47-8812-40f3-bd46-d1ee8d55def1-kube-api-access-xpl4b\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.498932 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.499311 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.499329 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.499610 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.501582 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.502114 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.502928 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.516197 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpl4b\" (UniqueName: \"kubernetes.io/projected/68266a47-8812-40f3-bd46-d1ee8d55def1-kube-api-access-xpl4b\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.578679 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 07:38:05 crc kubenswrapper[4520]: I0130 07:38:05.840221 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" podUID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerName="oauth-openshift" containerID="cri-o://c2ee130e090a1059087b7cef46dc305bc7d33cf7086869d981652ca11e954d88" gracePeriod=14 Jan 30 07:38:06 crc kubenswrapper[4520]: I0130 07:38:06.197094 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 30 07:38:06 crc kubenswrapper[4520]: I0130 07:38:06.288496 4520 generic.go:334] "Generic (PLEG): container finished" podID="97fba751-b99c-4b44-9ffd-06e6e7344680" containerID="c2ee130e090a1059087b7cef46dc305bc7d33cf7086869d981652ca11e954d88" exitCode=0 Jan 30 07:38:06 crc kubenswrapper[4520]: I0130 07:38:06.288622 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" event={"ID":"97fba751-b99c-4b44-9ffd-06e6e7344680","Type":"ContainerDied","Data":"c2ee130e090a1059087b7cef46dc305bc7d33cf7086869d981652ca11e954d88"} Jan 30 07:38:06 crc kubenswrapper[4520]: I0130 07:38:06.298315 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"68266a47-8812-40f3-bd46-d1ee8d55def1","Type":"ContainerStarted","Data":"a2ac20f40da2cb4739530cc65beca67c26dd9143de4dbe03a346882e7989d7ff"} Jan 30 07:38:07 crc kubenswrapper[4520]: I0130 07:38:07.308943 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" event={"ID":"97fba751-b99c-4b44-9ffd-06e6e7344680","Type":"ContainerStarted","Data":"b670ba8f277cc0fd09f4e45b0a9a137ab102b551f6a3502169b86684639bbef2"} Jan 30 07:38:07 crc kubenswrapper[4520]: I0130 07:38:07.309478 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 07:38:07 crc kubenswrapper[4520]: I0130 07:38:07.493958 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 07:38:07 crc kubenswrapper[4520]: I0130 07:38:07.676028 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6686467b65-4qb7w" Jan 30 07:38:09 crc kubenswrapper[4520]: I0130 07:38:09.326615 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" 
event={"ID":"68266a47-8812-40f3-bd46-d1ee8d55def1","Type":"ContainerStarted","Data":"bc080649353fb05e55ea0f671358532e59ff76f49f05f60501c06e43a7a2d68b"} Jan 30 07:38:09 crc kubenswrapper[4520]: I0130 07:38:09.359004 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=4.358982868 podStartE2EDuration="4.358982868s" podCreationTimestamp="2026-01-30 07:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:38:09.352209362 +0000 UTC m=+3202.980561533" watchObservedRunningTime="2026-01-30 07:38:09.358982868 +0000 UTC m=+3202.987335049" Jan 30 07:38:09 crc kubenswrapper[4520]: I0130 07:38:09.685625 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:38:09 crc kubenswrapper[4520]: E0130 07:38:09.686261 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:38:09 crc kubenswrapper[4520]: I0130 07:38:09.773586 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:38:09 crc kubenswrapper[4520]: I0130 07:38:09.814762 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:38:10 crc kubenswrapper[4520]: I0130 07:38:10.525593 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:38:10 crc kubenswrapper[4520]: I0130 07:38:10.568913 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4k8cc" Jan 30 07:38:10 crc kubenswrapper[4520]: I0130 07:38:10.984265 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k8cc"] Jan 30 07:38:11 crc kubenswrapper[4520]: I0130 07:38:11.358212 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gxfs"] Jan 30 07:38:11 crc kubenswrapper[4520]: I0130 07:38:11.358440 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7gxfs" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerName="registry-server" containerID="cri-o://9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5" gracePeriod=2 Jan 30 07:38:11 crc kubenswrapper[4520]: I0130 07:38:11.557474 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkc22"] Jan 30 07:38:11 crc kubenswrapper[4520]: I0130 07:38:11.557991 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fkc22" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" containerID="cri-o://d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1" gracePeriod=2 Jan 30 07:38:11 crc kubenswrapper[4520]: I0130 07:38:11.939074 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gxfs" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.048034 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-utilities\") pod \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.048120 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72dvr\" (UniqueName: \"kubernetes.io/projected/ea00fc18-fa83-4a0b-afbb-1faba49e4385-kube-api-access-72dvr\") pod \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.048169 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-catalog-content\") pod \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\" (UID: \"ea00fc18-fa83-4a0b-afbb-1faba49e4385\") " Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.049857 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-utilities" (OuterVolumeSpecName: "utilities") pod "ea00fc18-fa83-4a0b-afbb-1faba49e4385" (UID: "ea00fc18-fa83-4a0b-afbb-1faba49e4385"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.073290 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea00fc18-fa83-4a0b-afbb-1faba49e4385-kube-api-access-72dvr" (OuterVolumeSpecName: "kube-api-access-72dvr") pod "ea00fc18-fa83-4a0b-afbb-1faba49e4385" (UID: "ea00fc18-fa83-4a0b-afbb-1faba49e4385"). InnerVolumeSpecName "kube-api-access-72dvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.115173 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.126281 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea00fc18-fa83-4a0b-afbb-1faba49e4385" (UID: "ea00fc18-fa83-4a0b-afbb-1faba49e4385"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.149475 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-catalog-content\") pod \"ed10ab17-c950-4e94-8c42-f94a51e47083\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.149593 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpgl7\" (UniqueName: \"kubernetes.io/projected/ed10ab17-c950-4e94-8c42-f94a51e47083-kube-api-access-xpgl7\") pod \"ed10ab17-c950-4e94-8c42-f94a51e47083\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.149666 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-utilities\") pod \"ed10ab17-c950-4e94-8c42-f94a51e47083\" (UID: \"ed10ab17-c950-4e94-8c42-f94a51e47083\") " Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.150170 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.150188 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72dvr\" (UniqueName: \"kubernetes.io/projected/ea00fc18-fa83-4a0b-afbb-1faba49e4385-kube-api-access-72dvr\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.150199 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea00fc18-fa83-4a0b-afbb-1faba49e4385-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.150529 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-utilities" (OuterVolumeSpecName: "utilities") pod "ed10ab17-c950-4e94-8c42-f94a51e47083" (UID: "ed10ab17-c950-4e94-8c42-f94a51e47083"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.155437 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed10ab17-c950-4e94-8c42-f94a51e47083-kube-api-access-xpgl7" (OuterVolumeSpecName: "kube-api-access-xpgl7") pod "ed10ab17-c950-4e94-8c42-f94a51e47083" (UID: "ed10ab17-c950-4e94-8c42-f94a51e47083"). InnerVolumeSpecName "kube-api-access-xpgl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.165490 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed10ab17-c950-4e94-8c42-f94a51e47083" (UID: "ed10ab17-c950-4e94-8c42-f94a51e47083"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.252717 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.253037 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpgl7\" (UniqueName: \"kubernetes.io/projected/ed10ab17-c950-4e94-8c42-f94a51e47083-kube-api-access-xpgl7\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.253085 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed10ab17-c950-4e94-8c42-f94a51e47083-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.352594 4520 generic.go:334] "Generic (PLEG): container finished" podID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerID="9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5" exitCode=0 Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.352667 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gxfs" event={"ID":"ea00fc18-fa83-4a0b-afbb-1faba49e4385","Type":"ContainerDied","Data":"9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5"} Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.352698 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gxfs" event={"ID":"ea00fc18-fa83-4a0b-afbb-1faba49e4385","Type":"ContainerDied","Data":"7e98a04a05a9fddf121d22e64b9214acbf258effa6fbb00b7595de0267ba3087"} Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.352715 4520 scope.go:117] "RemoveContainer" containerID="9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.352854 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7gxfs" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.367846 4520 generic.go:334] "Generic (PLEG): container finished" podID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerID="d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1" exitCode=0 Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.368842 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkc22" event={"ID":"ed10ab17-c950-4e94-8c42-f94a51e47083","Type":"ContainerDied","Data":"d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1"} Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.368885 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fkc22" event={"ID":"ed10ab17-c950-4e94-8c42-f94a51e47083","Type":"ContainerDied","Data":"7fb337208a1077ae5237dc4f2f09a3e7b1eb22138de30bf665703577261d440f"} Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.368960 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fkc22" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.409587 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gxfs"] Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.432580 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7gxfs"] Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.446238 4520 scope.go:117] "RemoveContainer" containerID="921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.446392 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkc22"] Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.458706 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fkc22"] Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.542686 4520 scope.go:117] "RemoveContainer" containerID="b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.586661 4520 scope.go:117] "RemoveContainer" containerID="9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5" Jan 30 07:38:12 crc kubenswrapper[4520]: E0130 07:38:12.587585 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5\": container with ID starting with 9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5 not found: ID does not exist" containerID="9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.587625 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5"} err="failed to get container status \"9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5\": rpc error: code = NotFound desc = could not find container \"9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5\": container with ID starting with 9c797d4e315a12deb3e57c9c20a89fdc79fc4acda58b68bcfbb3d2c6905e44f5 not found: ID does not exist" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.587654 4520 scope.go:117] "RemoveContainer" containerID="921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0" Jan 30 07:38:12 crc kubenswrapper[4520]: E0130 07:38:12.587973 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0\": container with ID starting with 921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0 not found: ID does not exist" containerID="921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.588016 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0"} err="failed to get container status \"921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0\": rpc error: code = NotFound desc = could not find container \"921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0\": container with ID starting with 
921f248f749c773b89da298b517a6d9f1b56be72c623b717ca88ef4d798bbab0 not found: ID does not exist" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.588047 4520 scope.go:117] "RemoveContainer" containerID="b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f" Jan 30 07:38:12 crc kubenswrapper[4520]: E0130 07:38:12.588377 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f\": container with ID starting with b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f not found: ID does not exist" containerID="b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.588406 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f"} err="failed to get container status \"b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f\": rpc error: code = NotFound desc = could not find container \"b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f\": container with ID starting with b72909095959aafc78799849a88d3d9f22a1bb2fc96eddc35a351c3961a5ab8f not found: ID does not exist" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.588424 4520 scope.go:117] "RemoveContainer" containerID="d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.645682 4520 scope.go:117] "RemoveContainer" containerID="76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.690837 4520 scope.go:117] "RemoveContainer" containerID="bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.708929 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" path="/var/lib/kubelet/pods/ea00fc18-fa83-4a0b-afbb-1faba49e4385/volumes" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.709540 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" path="/var/lib/kubelet/pods/ed10ab17-c950-4e94-8c42-f94a51e47083/volumes" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.723295 4520 scope.go:117] "RemoveContainer" containerID="d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1" Jan 30 07:38:12 crc kubenswrapper[4520]: E0130 07:38:12.727061 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1\": container with ID starting with d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1 not found: ID does not exist" containerID="d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.727113 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1"} err="failed to get container status \"d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1\": rpc error: code = NotFound desc = could not find container \"d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1\": container with ID starting with 
d70002b5e920db3ed124e95b3aa0d122b204dc1f642d30e1d98996df39bc6ff1 not found: ID does not exist" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.727149 4520 scope.go:117] "RemoveContainer" containerID="76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b" Jan 30 07:38:12 crc kubenswrapper[4520]: E0130 07:38:12.727498 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b\": container with ID starting with 76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b not found: ID does not exist" containerID="76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.727550 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b"} err="failed to get container status \"76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b\": rpc error: code = NotFound desc = could not find container \"76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b\": container with ID starting with 76b1262a53770ab043c15c57c445e7422c14ffecfa60b5971fd6ac5f7941759b not found: ID does not exist" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.727573 4520 scope.go:117] "RemoveContainer" containerID="bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6" Jan 30 07:38:12 crc kubenswrapper[4520]: E0130 07:38:12.728132 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6\": container with ID starting with bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6 not found: ID does not exist" containerID="bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6" Jan 30 07:38:12 crc kubenswrapper[4520]: I0130 07:38:12.728179 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6"} err="failed to get container status \"bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6\": rpc error: code = NotFound desc = could not find container \"bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6\": container with ID starting with bfd4f5f0dd80e5e00b3fa8ba04dc954f9bc85f5451a0b709e2130f8b053559e6 not found: ID does not exist" Jan 30 07:38:14 crc kubenswrapper[4520]: I0130 07:38:14.171407 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4d5j" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" probeResult="failure" output=< Jan 30 07:38:14 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:38:14 crc kubenswrapper[4520]: > Jan 30 07:38:22 crc kubenswrapper[4520]: I0130 07:38:22.690180 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:38:22 crc kubenswrapper[4520]: E0130 07:38:22.691185 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:38:23 crc kubenswrapper[4520]: I0130 07:38:23.132892 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:38:23 crc kubenswrapper[4520]: I0130 07:38:23.177665 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:38:23 crc kubenswrapper[4520]: I0130 07:38:23.369154 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4d5j"] Jan 30 07:38:24 crc kubenswrapper[4520]: I0130 07:38:24.480508 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4d5j" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" containerID="cri-o://d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f" gracePeriod=2 Jan 30 07:38:24 crc kubenswrapper[4520]: I0130 07:38:24.992114 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.042372 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-catalog-content\") pod \"61cc98f1-a66d-488c-a076-914ada7e8de1\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.042600 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-utilities\") pod \"61cc98f1-a66d-488c-a076-914ada7e8de1\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.042673 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vj77\" (UniqueName: \"kubernetes.io/projected/61cc98f1-a66d-488c-a076-914ada7e8de1-kube-api-access-9vj77\") pod \"61cc98f1-a66d-488c-a076-914ada7e8de1\" (UID: \"61cc98f1-a66d-488c-a076-914ada7e8de1\") " Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.043122 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-utilities" (OuterVolumeSpecName: "utilities") pod "61cc98f1-a66d-488c-a076-914ada7e8de1" (UID: "61cc98f1-a66d-488c-a076-914ada7e8de1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.043745 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.050083 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61cc98f1-a66d-488c-a076-914ada7e8de1-kube-api-access-9vj77" (OuterVolumeSpecName: "kube-api-access-9vj77") pod "61cc98f1-a66d-488c-a076-914ada7e8de1" (UID: "61cc98f1-a66d-488c-a076-914ada7e8de1"). InnerVolumeSpecName "kube-api-access-9vj77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.141625 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61cc98f1-a66d-488c-a076-914ada7e8de1" (UID: "61cc98f1-a66d-488c-a076-914ada7e8de1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.146583 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vj77\" (UniqueName: \"kubernetes.io/projected/61cc98f1-a66d-488c-a076-914ada7e8de1-kube-api-access-9vj77\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.146622 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cc98f1-a66d-488c-a076-914ada7e8de1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.489964 4520 generic.go:334] "Generic (PLEG): container finished" podID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerID="d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f" exitCode=0 Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.490015 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4d5j" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.490032 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4d5j" event={"ID":"61cc98f1-a66d-488c-a076-914ada7e8de1","Type":"ContainerDied","Data":"d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f"} Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.491206 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4d5j" event={"ID":"61cc98f1-a66d-488c-a076-914ada7e8de1","Type":"ContainerDied","Data":"57511313b0e0029a28cbea244dfecd658bb879cb4a5d30f0adc8c3a94336800e"} Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.491226 4520 scope.go:117] "RemoveContainer" containerID="d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.515135 4520 scope.go:117] "RemoveContainer" containerID="8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.536190 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4d5j"] Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.543806 4520 scope.go:117] "RemoveContainer" containerID="117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.547873 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4d5j"] Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.577466 4520 scope.go:117] "RemoveContainer" containerID="d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f" Jan 30 07:38:25 crc kubenswrapper[4520]: E0130 07:38:25.578003 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f\": container with ID starting with d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f 
not found: ID does not exist" containerID="d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.578051 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f"} err="failed to get container status \"d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f\": rpc error: code = NotFound desc = could not find container \"d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f\": container with ID starting with d03f0ddb9c8e4d017f583b0bb490822f4a0a16f046cb6e323287f9723e71446f not found: ID does not exist" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.578078 4520 scope.go:117] "RemoveContainer" containerID="8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c" Jan 30 07:38:25 crc kubenswrapper[4520]: E0130 07:38:25.578530 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c\": container with ID starting with 8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c not found: ID does not exist" containerID="8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.578632 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c"} err="failed to get container status \"8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c\": rpc error: code = NotFound desc = could not find container \"8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c\": container with ID starting with 8c9a553e46cfa67318ca30808c3f0aef9149580ad0651ea958c70f6b64f0659c not found: ID does not exist" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.578719 4520 scope.go:117] "RemoveContainer" containerID="117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300" Jan 30 07:38:25 crc kubenswrapper[4520]: E0130 07:38:25.579119 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300\": container with ID starting with 117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300 not found: ID does not exist" containerID="117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300" Jan 30 07:38:25 crc kubenswrapper[4520]: I0130 07:38:25.579142 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300"} err="failed to get container status \"117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300\": rpc error: code = NotFound desc = could not find container \"117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300\": container with ID starting with 117cdb71ac2913f2d468a016d9f8dfeb1b2e4da6872eac53b793589344ab3300 not found: ID does not exist" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.507444 4520 generic.go:334] "Generic (PLEG): container finished" podID="9df01147-3505-4e88-b91c-671e2149ab19" containerID="c797e54de8f94f7563f521d0d23ccf8a9d1bb6a59c76866e0cb38ddd804574be" exitCode=137 Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.507551 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerDied","Data":"c797e54de8f94f7563f521d0d23ccf8a9d1bb6a59c76866e0cb38ddd804574be"} Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.507880 4520 scope.go:117] "RemoveContainer" containerID="0739919db0e42ab2d21e594a295adc079dbd11ac4f42597ed8b5b399d87d6ee4" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.610400 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.701583 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-combined-ca-bundle\") pod \"9df01147-3505-4e88-b91c-671e2149ab19\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.701949 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-ceilometer-tls-certs\") pod \"9df01147-3505-4e88-b91c-671e2149ab19\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.702183 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-log-httpd\") pod \"9df01147-3505-4e88-b91c-671e2149ab19\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.702418 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-scripts\") pod \"9df01147-3505-4e88-b91c-671e2149ab19\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.702485 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-config-data\") pod \"9df01147-3505-4e88-b91c-671e2149ab19\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.702586 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-run-httpd\") pod \"9df01147-3505-4e88-b91c-671e2149ab19\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.702625 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5499g\" (UniqueName: \"kubernetes.io/projected/9df01147-3505-4e88-b91c-671e2149ab19-kube-api-access-5499g\") pod \"9df01147-3505-4e88-b91c-671e2149ab19\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.702648 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-sg-core-conf-yaml\") pod \"9df01147-3505-4e88-b91c-671e2149ab19\" (UID: \"9df01147-3505-4e88-b91c-671e2149ab19\") " Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.703988 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9df01147-3505-4e88-b91c-671e2149ab19" (UID: "9df01147-3505-4e88-b91c-671e2149ab19"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.710007 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9df01147-3505-4e88-b91c-671e2149ab19" (UID: "9df01147-3505-4e88-b91c-671e2149ab19"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.713335 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" path="/var/lib/kubelet/pods/61cc98f1-a66d-488c-a076-914ada7e8de1/volumes" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.716575 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-scripts" (OuterVolumeSpecName: "scripts") pod "9df01147-3505-4e88-b91c-671e2149ab19" (UID: "9df01147-3505-4e88-b91c-671e2149ab19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.721135 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df01147-3505-4e88-b91c-671e2149ab19-kube-api-access-5499g" (OuterVolumeSpecName: "kube-api-access-5499g") pod "9df01147-3505-4e88-b91c-671e2149ab19" (UID: "9df01147-3505-4e88-b91c-671e2149ab19"). InnerVolumeSpecName "kube-api-access-5499g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.739834 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9df01147-3505-4e88-b91c-671e2149ab19" (UID: "9df01147-3505-4e88-b91c-671e2149ab19"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.800559 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9df01147-3505-4e88-b91c-671e2149ab19" (UID: "9df01147-3505-4e88-b91c-671e2149ab19"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.806778 4520 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.807887 4520 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.807982 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5499g\" (UniqueName: \"kubernetes.io/projected/9df01147-3505-4e88-b91c-671e2149ab19-kube-api-access-5499g\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.808041 4520 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.808121 4520 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.808170 4520 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9df01147-3505-4e88-b91c-671e2149ab19-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.836165 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-config-data" (OuterVolumeSpecName: "config-data") pod "9df01147-3505-4e88-b91c-671e2149ab19" (UID: "9df01147-3505-4e88-b91c-671e2149ab19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.852982 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9df01147-3505-4e88-b91c-671e2149ab19" (UID: "9df01147-3505-4e88-b91c-671e2149ab19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.911164 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:26 crc kubenswrapper[4520]: I0130 07:38:26.911197 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df01147-3505-4e88-b91c-671e2149ab19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.521410 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9df01147-3505-4e88-b91c-671e2149ab19","Type":"ContainerDied","Data":"8612fc75d144620d6dbc7f98e29e737628baffb079c8242f1a41171c0fb0285b"} Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.521697 4520 scope.go:117] "RemoveContainer" containerID="c797e54de8f94f7563f521d0d23ccf8a9d1bb6a59c76866e0cb38ddd804574be" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.521806 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.551284 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.555817 4520 scope.go:117] "RemoveContainer" containerID="dd73d25d370ca14503c1034bad7c9cd70882e221992943d2f672c1265130f65f" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.558924 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602109 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602468 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602481 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602495 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602500 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602533 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="proxy-httpd" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602539 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="proxy-httpd" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602548 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="extract-content" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602555 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="extract-content" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602565 4520 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602571 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602582 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="extract-content" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602587 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="extract-content" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602597 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="sg-core" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602602 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="sg-core" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602615 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="extract-utilities" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602620 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="extract-utilities" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602626 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="extract-utilities" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602631 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="extract-utilities" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602643 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602648 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602660 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602665 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602676 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerName="extract-utilities" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602682 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerName="extract-utilities" Jan 30 07:38:27 crc kubenswrapper[4520]: E0130 07:38:27.602691 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-notification-agent" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602696 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-notification-agent" Jan 30 07:38:27 crc 
kubenswrapper[4520]: E0130 07:38:27.602705 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerName="extract-content" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.602711 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerName="extract-content" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.604690 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-notification-agent" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.604713 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed10ab17-c950-4e94-8c42-f94a51e47083" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.604726 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cc98f1-a66d-488c-a076-914ada7e8de1" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.604743 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="sg-core" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.604752 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.604764 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="proxy-httpd" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.604793 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea00fc18-fa83-4a0b-afbb-1faba49e4385" containerName="registry-server" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.605090 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df01147-3505-4e88-b91c-671e2149ab19" containerName="ceilometer-central-agent" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.607667 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.616445 4520 scope.go:117] "RemoveContainer" containerID="6a0eab6d2a46fa88f690d128a4a5ad7fe06e2be80d9292edbe570783e8d3a999" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.617790 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.617816 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.617865 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.631169 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.661675 4520 scope.go:117] "RemoveContainer" containerID="5f65b0709cbc49f21ab500e35c601379fbeed5bf2d95a64736a3a046c3ffaf9c" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.737402 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vt7n\" (UniqueName: \"kubernetes.io/projected/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-kube-api-access-5vt7n\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.737576 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-log-httpd\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.737614 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-config-data\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.737798 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-scripts\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.738195 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-run-httpd\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.738274 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.738373 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.738439 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.840555 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-run-httpd\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.840610 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.840657 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.840690 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.840726 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vt7n\" (UniqueName: \"kubernetes.io/projected/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-kube-api-access-5vt7n\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.840786 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-log-httpd\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.840807 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-config-data\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.840845 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-scripts\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.841509 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-run-httpd\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.842174 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-log-httpd\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.848224 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-scripts\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.849503 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-config-data\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.853007 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.858132 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vt7n\" (UniqueName: \"kubernetes.io/projected/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-kube-api-access-5vt7n\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.861177 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.869606 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4e159d7-3cd0-4cca-9ade-30ac1847b2b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4\") " pod="openstack/ceilometer-0" Jan 30 07:38:27 crc kubenswrapper[4520]: I0130 07:38:27.933352 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 07:38:28 crc kubenswrapper[4520]: I0130 07:38:28.607341 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 07:38:28 crc kubenswrapper[4520]: I0130 07:38:28.696335 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df01147-3505-4e88-b91c-671e2149ab19" path="/var/lib/kubelet/pods/9df01147-3505-4e88-b91c-671e2149ab19/volumes" Jan 30 07:38:29 crc kubenswrapper[4520]: I0130 07:38:29.538154 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4","Type":"ContainerStarted","Data":"679418ef73bc8754284ce70af3358193db3ba603dbd2e0e74f6a663e78f084fe"} Jan 30 07:38:30 crc kubenswrapper[4520]: I0130 07:38:30.552851 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4","Type":"ContainerStarted","Data":"4cdfe4edcec19f21828ab99ea76afb4268796637da97f72bf9cc598c40798866"} Jan 30 07:38:30 crc kubenswrapper[4520]: I0130 07:38:30.553268 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4","Type":"ContainerStarted","Data":"57b83b85b25b2df5e2ef0a027dcdcfd8a76bcf3d91cd5ca1deea7d17df4c1d10"} Jan 30 07:38:31 crc kubenswrapper[4520]: I0130 07:38:31.564062 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4","Type":"ContainerStarted","Data":"a2351fe1ed50f3d54393bac7778ec852e77716f6d6f6ff113594e7628c25b89d"} Jan 30 07:38:34 crc kubenswrapper[4520]: I0130 07:38:34.591875 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e159d7-3cd0-4cca-9ade-30ac1847b2b4","Type":"ContainerStarted","Data":"aef6bbab2b345a185043ca116c4720fea16c63d0274f99e5b0cb582e24112431"} Jan 30 07:38:34 crc kubenswrapper[4520]: I0130 07:38:34.593828 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 07:38:34 crc kubenswrapper[4520]: I0130 07:38:34.613102 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.739191796 podStartE2EDuration="7.613083363s" podCreationTimestamp="2026-01-30 07:38:27 +0000 UTC" firstStartedPulling="2026-01-30 07:38:28.631333309 +0000 UTC m=+3222.259685490" lastFinishedPulling="2026-01-30 07:38:33.505224887 +0000 UTC m=+3227.133577057" observedRunningTime="2026-01-30 07:38:34.607480789 +0000 UTC m=+3228.235832970" watchObservedRunningTime="2026-01-30 07:38:34.613083363 +0000 UTC m=+3228.241435544" Jan 30 07:38:34 crc kubenswrapper[4520]: I0130 07:38:34.639419 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 07:38:37 crc kubenswrapper[4520]: I0130 07:38:37.687310 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:38:37 crc kubenswrapper[4520]: E0130 07:38:37.699486 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:38:49 crc kubenswrapper[4520]: I0130 07:38:49.686635 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:38:49 crc kubenswrapper[4520]: E0130 07:38:49.687582 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:38:50 crc kubenswrapper[4520]: E0130 07:38:50.959665 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:50368->192.168.25.87:39417: write tcp 192.168.25.87:50368->192.168.25.87:39417: write: broken pipe Jan 30 07:38:57 crc kubenswrapper[4520]: I0130 07:38:57.947653 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.662566 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64d7b7f77f-brl5q"] Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.665532 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.677816 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64d7b7f77f-brl5q"] Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.840148 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-httpd-config\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.840837 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-ovndb-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.841045 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-combined-ca-bundle\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.841113 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-config\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.841132 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-public-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: 
\"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.841151 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7sn4\" (UniqueName: \"kubernetes.io/projected/1dda99e9-b232-4721-b801-18c61513277a-kube-api-access-r7sn4\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.841175 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-internal-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.942549 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-combined-ca-bundle\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.942626 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-config\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.942646 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-public-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.942667 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7sn4\" (UniqueName: \"kubernetes.io/projected/1dda99e9-b232-4721-b801-18c61513277a-kube-api-access-r7sn4\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.942697 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-internal-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.942743 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-httpd-config\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.942808 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-ovndb-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " 
pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.950118 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-internal-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.950360 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-combined-ca-bundle\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.951478 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-config\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.952201 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-httpd-config\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.953054 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-ovndb-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.954133 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-public-tls-certs\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.957327 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7sn4\" (UniqueName: \"kubernetes.io/projected/1dda99e9-b232-4721-b801-18c61513277a-kube-api-access-r7sn4\") pod \"neutron-64d7b7f77f-brl5q\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:01 crc kubenswrapper[4520]: I0130 07:39:01.980067 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:02 crc kubenswrapper[4520]: I0130 07:39:02.803639 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64d7b7f77f-brl5q"] Jan 30 07:39:02 crc kubenswrapper[4520]: I0130 07:39:02.838204 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d7b7f77f-brl5q" event={"ID":"1dda99e9-b232-4721-b801-18c61513277a","Type":"ContainerStarted","Data":"0b00cd6f29364a67255ac6e3e5919caa80b753f0abf32c9d323fe6f32e215105"} Jan 30 07:39:03 crc kubenswrapper[4520]: I0130 07:39:03.848868 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d7b7f77f-brl5q" event={"ID":"1dda99e9-b232-4721-b801-18c61513277a","Type":"ContainerStarted","Data":"bf64911203186a5f6d2cbf5275ea75c1912720659aca97a4fdb2edc32f399f6c"} Jan 30 07:39:03 crc kubenswrapper[4520]: I0130 07:39:03.849260 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d7b7f77f-brl5q" event={"ID":"1dda99e9-b232-4721-b801-18c61513277a","Type":"ContainerStarted","Data":"46e8b12be233a7a79ddf09fed3c56fa1d7e5335ecf5f7fbd68eb747ebfd0145e"} Jan 30 07:39:03 crc kubenswrapper[4520]: I0130 07:39:03.849394 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:04 crc kubenswrapper[4520]: I0130 07:39:04.686536 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:39:04 crc kubenswrapper[4520]: E0130 07:39:04.686936 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:39:17 crc kubenswrapper[4520]: I0130 07:39:17.685788 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:39:17 crc kubenswrapper[4520]: E0130 07:39:17.686475 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:39:30 crc kubenswrapper[4520]: I0130 07:39:30.686067 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:39:30 crc kubenswrapper[4520]: E0130 07:39:30.686888 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:39:31 crc kubenswrapper[4520]: I0130 07:39:31.993563 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 07:39:32 crc 
kubenswrapper[4520]: I0130 07:39:32.016099 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64d7b7f77f-brl5q" podStartSLOduration=31.016070938 podStartE2EDuration="31.016070938s" podCreationTimestamp="2026-01-30 07:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:39:03.864719745 +0000 UTC m=+3257.493071936" watchObservedRunningTime="2026-01-30 07:39:32.016070938 +0000 UTC m=+3285.644423119" Jan 30 07:39:32 crc kubenswrapper[4520]: I0130 07:39:32.069067 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c56fc575-hzw9q"] Jan 30 07:39:32 crc kubenswrapper[4520]: I0130 07:39:32.069837 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c56fc575-hzw9q" podUID="db846546-7955-4c19-87aa-188602e349e8" containerName="neutron-api" containerID="cri-o://8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19" gracePeriod=30 Jan 30 07:39:32 crc kubenswrapper[4520]: I0130 07:39:32.069981 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c56fc575-hzw9q" podUID="db846546-7955-4c19-87aa-188602e349e8" containerName="neutron-httpd" containerID="cri-o://9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df" gracePeriod=30 Jan 30 07:39:33 crc kubenswrapper[4520]: I0130 07:39:33.086076 4520 generic.go:334] "Generic (PLEG): container finished" podID="db846546-7955-4c19-87aa-188602e349e8" containerID="9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df" exitCode=0 Jan 30 07:39:33 crc kubenswrapper[4520]: I0130 07:39:33.086419 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c56fc575-hzw9q" event={"ID":"db846546-7955-4c19-87aa-188602e349e8","Type":"ContainerDied","Data":"9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df"} Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.838745 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.937194 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-combined-ca-bundle\") pod \"db846546-7955-4c19-87aa-188602e349e8\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.937293 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-httpd-config\") pod \"db846546-7955-4c19-87aa-188602e349e8\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.937437 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrx6j\" (UniqueName: \"kubernetes.io/projected/db846546-7955-4c19-87aa-188602e349e8-kube-api-access-wrx6j\") pod \"db846546-7955-4c19-87aa-188602e349e8\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.937488 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-ovndb-tls-certs\") pod \"db846546-7955-4c19-87aa-188602e349e8\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.937940 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-public-tls-certs\") pod \"db846546-7955-4c19-87aa-188602e349e8\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.938140 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-config\") pod \"db846546-7955-4c19-87aa-188602e349e8\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.938168 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-internal-tls-certs\") pod \"db846546-7955-4c19-87aa-188602e349e8\" (UID: \"db846546-7955-4c19-87aa-188602e349e8\") " Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.948291 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "db846546-7955-4c19-87aa-188602e349e8" (UID: "db846546-7955-4c19-87aa-188602e349e8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.951921 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db846546-7955-4c19-87aa-188602e349e8-kube-api-access-wrx6j" (OuterVolumeSpecName: "kube-api-access-wrx6j") pod "db846546-7955-4c19-87aa-188602e349e8" (UID: "db846546-7955-4c19-87aa-188602e349e8"). InnerVolumeSpecName "kube-api-access-wrx6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.986413 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db846546-7955-4c19-87aa-188602e349e8" (UID: "db846546-7955-4c19-87aa-188602e349e8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.989952 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db846546-7955-4c19-87aa-188602e349e8" (UID: "db846546-7955-4c19-87aa-188602e349e8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.995574 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-config" (OuterVolumeSpecName: "config") pod "db846546-7955-4c19-87aa-188602e349e8" (UID: "db846546-7955-4c19-87aa-188602e349e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:39:42 crc kubenswrapper[4520]: I0130 07:39:42.997255 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db846546-7955-4c19-87aa-188602e349e8" (UID: "db846546-7955-4c19-87aa-188602e349e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.016692 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "db846546-7955-4c19-87aa-188602e349e8" (UID: "db846546-7955-4c19-87aa-188602e349e8"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.044199 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.044391 4520 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.044404 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.044414 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.044423 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrx6j\" (UniqueName: \"kubernetes.io/projected/db846546-7955-4c19-87aa-188602e349e8-kube-api-access-wrx6j\") on node \"crc\" DevicePath \"\"" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.044431 4520 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.044439 4520 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db846546-7955-4c19-87aa-188602e349e8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.169775 4520 generic.go:334] "Generic (PLEG): container finished" podID="db846546-7955-4c19-87aa-188602e349e8" containerID="8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19" exitCode=0 Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.169798 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c56fc575-hzw9q" event={"ID":"db846546-7955-4c19-87aa-188602e349e8","Type":"ContainerDied","Data":"8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19"} Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.169824 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c56fc575-hzw9q" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.169836 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c56fc575-hzw9q" event={"ID":"db846546-7955-4c19-87aa-188602e349e8","Type":"ContainerDied","Data":"bb9f8bb9d8fac7f38faeec805ff3f4222289b54998d38976784db23ab326fb2f"} Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.169873 4520 scope.go:117] "RemoveContainer" containerID="9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.200334 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c56fc575-hzw9q"] Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.201041 4520 scope.go:117] "RemoveContainer" containerID="8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.215786 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c56fc575-hzw9q"] Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.241100 4520 scope.go:117] "RemoveContainer" containerID="9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df" Jan 30 07:39:43 crc kubenswrapper[4520]: E0130 07:39:43.247084 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df\": container with ID starting with 9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df not found: ID does not exist" containerID="9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.247202 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df"} err="failed to get container status \"9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df\": rpc error: code = NotFound desc = could not find container \"9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df\": container with ID starting with 9bf6cad630f2aefe0033f8aaa1af66013c93557e1dc8702f77ed55d6477522df not found: ID does not exist" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.247227 4520 scope.go:117] "RemoveContainer" containerID="8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19" Jan 30 07:39:43 crc kubenswrapper[4520]: E0130 07:39:43.247928 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19\": container with ID starting with 8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19 not found: ID does not exist" containerID="8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19" Jan 30 07:39:43 crc kubenswrapper[4520]: I0130 07:39:43.247967 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19"} err="failed to get container status \"8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19\": rpc error: code = NotFound desc = could not find container \"8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19\": container with ID starting with 8921dfc3e11781d685332b13442680b75ab1cb831349b43ddfc8b2906c3aca19 not found: ID does not exist" Jan 30 07:39:44 crc 
kubenswrapper[4520]: I0130 07:39:44.693899 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db846546-7955-4c19-87aa-188602e349e8" path="/var/lib/kubelet/pods/db846546-7955-4c19-87aa-188602e349e8/volumes" Jan 30 07:39:45 crc kubenswrapper[4520]: I0130 07:39:45.686588 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:39:45 crc kubenswrapper[4520]: E0130 07:39:45.686971 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:39:58 crc kubenswrapper[4520]: I0130 07:39:58.688928 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:39:58 crc kubenswrapper[4520]: E0130 07:39:58.689811 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:40:11 crc kubenswrapper[4520]: I0130 07:40:11.686010 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:40:11 crc kubenswrapper[4520]: E0130 07:40:11.687255 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:40:22 crc kubenswrapper[4520]: I0130 07:40:22.685857 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:40:22 crc kubenswrapper[4520]: E0130 07:40:22.686737 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:40:34 crc kubenswrapper[4520]: I0130 07:40:34.686352 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:40:34 crc kubenswrapper[4520]: E0130 07:40:34.687046 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:40:45 crc kubenswrapper[4520]: I0130 07:40:45.685030 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:40:45 crc kubenswrapper[4520]: E0130 07:40:45.685844 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:40:57 crc kubenswrapper[4520]: I0130 07:40:57.686540 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:40:57 crc kubenswrapper[4520]: E0130 07:40:57.687192 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:41:08 crc kubenswrapper[4520]: I0130 07:41:08.686036 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:41:08 crc kubenswrapper[4520]: E0130 07:41:08.686792 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:41:22 crc kubenswrapper[4520]: I0130 07:41:22.686073 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:41:22 crc kubenswrapper[4520]: E0130 07:41:22.686645 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:41:34 crc kubenswrapper[4520]: I0130 07:41:34.686262 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:41:34 crc kubenswrapper[4520]: E0130 07:41:34.687000 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:41:48 crc kubenswrapper[4520]: I0130 07:41:48.685760 4520 scope.go:117] "RemoveContainer" 
containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:41:48 crc kubenswrapper[4520]: E0130 07:41:48.686418 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:41:56 crc kubenswrapper[4520]: I0130 07:41:56.933712 4520 scope.go:117] "RemoveContainer" containerID="a708a5f137838545310620c1583bf8c4407e242ba89f2002c6c63c62f9cc5094" Jan 30 07:41:56 crc kubenswrapper[4520]: I0130 07:41:56.956402 4520 scope.go:117] "RemoveContainer" containerID="43801e34aed0c0ea8dcbbf0e195fa983e709978bdfcf52c345e007716a607bd6" Jan 30 07:42:03 crc kubenswrapper[4520]: I0130 07:42:03.686305 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:42:04 crc kubenswrapper[4520]: I0130 07:42:04.280244 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"456b8a5b16b91532f2415fc1c7a5797c992b95763d1a3bcd0cba514d2afb94ed"} Jan 30 07:42:57 crc kubenswrapper[4520]: I0130 07:42:57.022193 4520 scope.go:117] "RemoveContainer" containerID="1e0631a8d70c63516bd1ef308f72fc907ffe1372a93d815b1d5ab35b349d4696" Jan 30 07:44:27 crc kubenswrapper[4520]: I0130 07:44:27.794876 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:44:27 crc kubenswrapper[4520]: I0130 07:44:27.796196 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:44:57 crc kubenswrapper[4520]: I0130 07:44:57.793397 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:44:57 crc kubenswrapper[4520]: I0130 07:44:57.794187 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.264496 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq"] Jan 30 07:45:00 crc kubenswrapper[4520]: E0130 07:45:00.266268 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db846546-7955-4c19-87aa-188602e349e8" containerName="neutron-httpd" Jan 30 07:45:00 crc kubenswrapper[4520]: 
I0130 07:45:00.266303 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="db846546-7955-4c19-87aa-188602e349e8" containerName="neutron-httpd" Jan 30 07:45:00 crc kubenswrapper[4520]: E0130 07:45:00.266321 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db846546-7955-4c19-87aa-188602e349e8" containerName="neutron-api" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.266328 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="db846546-7955-4c19-87aa-188602e349e8" containerName="neutron-api" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.267861 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="db846546-7955-4c19-87aa-188602e349e8" containerName="neutron-httpd" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.267890 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="db846546-7955-4c19-87aa-188602e349e8" containerName="neutron-api" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.269026 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.276686 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq"] Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.278649 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.278657 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.312459 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b35994b-b093-47d9-870f-207a997a2017-config-volume\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.312627 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsqx8\" (UniqueName: \"kubernetes.io/projected/4b35994b-b093-47d9-870f-207a997a2017-kube-api-access-rsqx8\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.312851 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b35994b-b093-47d9-870f-207a997a2017-secret-volume\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.414972 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b35994b-b093-47d9-870f-207a997a2017-config-volume\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 
07:45:00.415050 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsqx8\" (UniqueName: \"kubernetes.io/projected/4b35994b-b093-47d9-870f-207a997a2017-kube-api-access-rsqx8\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.415133 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b35994b-b093-47d9-870f-207a997a2017-secret-volume\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.417366 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b35994b-b093-47d9-870f-207a997a2017-config-volume\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.424860 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b35994b-b093-47d9-870f-207a997a2017-secret-volume\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.431727 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsqx8\" (UniqueName: \"kubernetes.io/projected/4b35994b-b093-47d9-870f-207a997a2017-kube-api-access-rsqx8\") pod \"collect-profiles-29495985-9dwtq\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:00 crc kubenswrapper[4520]: I0130 07:45:00.588136 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:01 crc kubenswrapper[4520]: I0130 07:45:01.148701 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq"] Jan 30 07:45:01 crc kubenswrapper[4520]: I0130 07:45:01.555872 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" event={"ID":"4b35994b-b093-47d9-870f-207a997a2017","Type":"ContainerStarted","Data":"89f95c983402e2c9180cb55f0e06d08bb623337186328d11eff57457328c9284"} Jan 30 07:45:01 crc kubenswrapper[4520]: I0130 07:45:01.555929 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" event={"ID":"4b35994b-b093-47d9-870f-207a997a2017","Type":"ContainerStarted","Data":"2ef1457ceb9a3bdcede2d85d6d35a137bc6a2145f15f991f2b4b7445d0ea0999"} Jan 30 07:45:01 crc kubenswrapper[4520]: I0130 07:45:01.569849 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" podStartSLOduration=1.569825786 podStartE2EDuration="1.569825786s" podCreationTimestamp="2026-01-30 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:45:01.56624854 +0000 UTC m=+3615.194600720" watchObservedRunningTime="2026-01-30 07:45:01.569825786 +0000 UTC m=+3615.198177967" Jan 30 07:45:02 crc kubenswrapper[4520]: I0130 07:45:02.565790 4520 generic.go:334] "Generic (PLEG): container finished" podID="4b35994b-b093-47d9-870f-207a997a2017" containerID="89f95c983402e2c9180cb55f0e06d08bb623337186328d11eff57457328c9284" exitCode=0 Jan 30 07:45:02 crc kubenswrapper[4520]: I0130 07:45:02.565872 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" event={"ID":"4b35994b-b093-47d9-870f-207a997a2017","Type":"ContainerDied","Data":"89f95c983402e2c9180cb55f0e06d08bb623337186328d11eff57457328c9284"} Jan 30 07:45:03 crc kubenswrapper[4520]: I0130 07:45:03.909901 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:03 crc kubenswrapper[4520]: I0130 07:45:03.999339 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b35994b-b093-47d9-870f-207a997a2017-secret-volume\") pod \"4b35994b-b093-47d9-870f-207a997a2017\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:03.999992 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b35994b-b093-47d9-870f-207a997a2017-config-volume\") pod \"4b35994b-b093-47d9-870f-207a997a2017\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.000062 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsqx8\" (UniqueName: \"kubernetes.io/projected/4b35994b-b093-47d9-870f-207a997a2017-kube-api-access-rsqx8\") pod \"4b35994b-b093-47d9-870f-207a997a2017\" (UID: \"4b35994b-b093-47d9-870f-207a997a2017\") " Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.000551 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b35994b-b093-47d9-870f-207a997a2017-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b35994b-b093-47d9-870f-207a997a2017" (UID: "4b35994b-b093-47d9-870f-207a997a2017"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.000815 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b35994b-b093-47d9-870f-207a997a2017-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.005359 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b35994b-b093-47d9-870f-207a997a2017-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b35994b-b093-47d9-870f-207a997a2017" (UID: "4b35994b-b093-47d9-870f-207a997a2017"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.008661 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b35994b-b093-47d9-870f-207a997a2017-kube-api-access-rsqx8" (OuterVolumeSpecName: "kube-api-access-rsqx8") pod "4b35994b-b093-47d9-870f-207a997a2017" (UID: "4b35994b-b093-47d9-870f-207a997a2017"). InnerVolumeSpecName "kube-api-access-rsqx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.102416 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b35994b-b093-47d9-870f-207a997a2017-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.102456 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsqx8\" (UniqueName: \"kubernetes.io/projected/4b35994b-b093-47d9-870f-207a997a2017-kube-api-access-rsqx8\") on node \"crc\" DevicePath \"\"" Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.583406 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" event={"ID":"4b35994b-b093-47d9-870f-207a997a2017","Type":"ContainerDied","Data":"2ef1457ceb9a3bdcede2d85d6d35a137bc6a2145f15f991f2b4b7445d0ea0999"} Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.583449 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef1457ceb9a3bdcede2d85d6d35a137bc6a2145f15f991f2b4b7445d0ea0999" Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.583488 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq" Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.648234 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"] Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.654391 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495940-s889s"] Jan 30 07:45:04 crc kubenswrapper[4520]: I0130 07:45:04.694586 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639c6c1f-c3ef-44ad-bfba-7aa257d311bf" path="/var/lib/kubelet/pods/639c6c1f-c3ef-44ad-bfba-7aa257d311bf/volumes" Jan 30 07:45:27 crc kubenswrapper[4520]: I0130 07:45:27.793251 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:45:27 crc kubenswrapper[4520]: I0130 07:45:27.793790 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:45:27 crc kubenswrapper[4520]: I0130 07:45:27.793829 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:45:27 crc kubenswrapper[4520]: I0130 07:45:27.794247 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"456b8a5b16b91532f2415fc1c7a5797c992b95763d1a3bcd0cba514d2afb94ed"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:45:27 crc kubenswrapper[4520]: I0130 07:45:27.794293 4520 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://456b8a5b16b91532f2415fc1c7a5797c992b95763d1a3bcd0cba514d2afb94ed" gracePeriod=600 Jan 30 07:45:28 crc kubenswrapper[4520]: I0130 07:45:28.787895 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="456b8a5b16b91532f2415fc1c7a5797c992b95763d1a3bcd0cba514d2afb94ed" exitCode=0 Jan 30 07:45:28 crc kubenswrapper[4520]: I0130 07:45:28.787974 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"456b8a5b16b91532f2415fc1c7a5797c992b95763d1a3bcd0cba514d2afb94ed"} Jan 30 07:45:28 crc kubenswrapper[4520]: I0130 07:45:28.789930 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d"} Jan 30 07:45:28 crc kubenswrapper[4520]: I0130 07:45:28.789967 4520 scope.go:117] "RemoveContainer" containerID="5e0e6d3d22c8852b924c77449e25c4f60aadf93185a67fb78587771f3642aa6b" Jan 30 07:45:57 crc kubenswrapper[4520]: I0130 07:45:57.125111 4520 scope.go:117] "RemoveContainer" containerID="a6af4381c8d7ffd1e2f1f3755b7858d30f154cafa305db2c976428f5fa638957" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.669979 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b448d"] Jan 30 07:47:03 crc kubenswrapper[4520]: E0130 07:47:03.670736 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b35994b-b093-47d9-870f-207a997a2017" containerName="collect-profiles" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.670756 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b35994b-b093-47d9-870f-207a997a2017" containerName="collect-profiles" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.670920 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b35994b-b093-47d9-870f-207a997a2017" containerName="collect-profiles" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.673534 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.679877 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b448d"] Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.866665 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-catalog-content\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.866787 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwh6n\" (UniqueName: \"kubernetes.io/projected/ec1af5d1-9661-467d-aa18-7dfa0e07e458-kube-api-access-zwh6n\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.866809 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-utilities\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.968258 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwh6n\" (UniqueName: \"kubernetes.io/projected/ec1af5d1-9661-467d-aa18-7dfa0e07e458-kube-api-access-zwh6n\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.968305 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-utilities\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.968420 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-catalog-content\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.969348 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-utilities\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.969576 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-catalog-content\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:03 crc kubenswrapper[4520]: I0130 07:47:03.989648 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zwh6n\" (UniqueName: \"kubernetes.io/projected/ec1af5d1-9661-467d-aa18-7dfa0e07e458-kube-api-access-zwh6n\") pod \"certified-operators-b448d\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:04 crc kubenswrapper[4520]: I0130 07:47:04.289093 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:04 crc kubenswrapper[4520]: I0130 07:47:04.735605 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b448d"] Jan 30 07:47:05 crc kubenswrapper[4520]: I0130 07:47:05.510028 4520 generic.go:334] "Generic (PLEG): container finished" podID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerID="0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f" exitCode=0 Jan 30 07:47:05 crc kubenswrapper[4520]: I0130 07:47:05.510076 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b448d" event={"ID":"ec1af5d1-9661-467d-aa18-7dfa0e07e458","Type":"ContainerDied","Data":"0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f"} Jan 30 07:47:05 crc kubenswrapper[4520]: I0130 07:47:05.510104 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b448d" event={"ID":"ec1af5d1-9661-467d-aa18-7dfa0e07e458","Type":"ContainerStarted","Data":"a011aa5f813413523e243bf4d57b2c9621f0674dac01772b35b716d646251ec0"} Jan 30 07:47:05 crc kubenswrapper[4520]: I0130 07:47:05.513755 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 07:47:06 crc kubenswrapper[4520]: I0130 07:47:06.531842 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b448d" event={"ID":"ec1af5d1-9661-467d-aa18-7dfa0e07e458","Type":"ContainerStarted","Data":"3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88"} Jan 30 07:47:07 crc kubenswrapper[4520]: I0130 07:47:07.541075 4520 generic.go:334] "Generic (PLEG): container finished" podID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerID="3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88" exitCode=0 Jan 30 07:47:07 crc kubenswrapper[4520]: I0130 07:47:07.541256 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b448d" event={"ID":"ec1af5d1-9661-467d-aa18-7dfa0e07e458","Type":"ContainerDied","Data":"3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88"} Jan 30 07:47:08 crc kubenswrapper[4520]: I0130 07:47:08.552979 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b448d" event={"ID":"ec1af5d1-9661-467d-aa18-7dfa0e07e458","Type":"ContainerStarted","Data":"59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3"} Jan 30 07:47:08 crc kubenswrapper[4520]: I0130 07:47:08.575291 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b448d" podStartSLOduration=3.038708453 podStartE2EDuration="5.575274139s" podCreationTimestamp="2026-01-30 07:47:03 +0000 UTC" firstStartedPulling="2026-01-30 07:47:05.511304174 +0000 UTC m=+3739.139656354" lastFinishedPulling="2026-01-30 07:47:08.047869859 +0000 UTC m=+3741.676222040" observedRunningTime="2026-01-30 07:47:08.568183547 +0000 UTC m=+3742.196535728" watchObservedRunningTime="2026-01-30 
07:47:08.575274139 +0000 UTC m=+3742.203626321" Jan 30 07:47:14 crc kubenswrapper[4520]: I0130 07:47:14.289564 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:14 crc kubenswrapper[4520]: I0130 07:47:14.289934 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:14 crc kubenswrapper[4520]: I0130 07:47:14.324250 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:14 crc kubenswrapper[4520]: I0130 07:47:14.627255 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:14 crc kubenswrapper[4520]: I0130 07:47:14.689126 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b448d"] Jan 30 07:47:16 crc kubenswrapper[4520]: I0130 07:47:16.606687 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b448d" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerName="registry-server" containerID="cri-o://59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3" gracePeriod=2 Jan 30 07:47:16 crc kubenswrapper[4520]: I0130 07:47:16.977226 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:16 crc kubenswrapper[4520]: I0130 07:47:16.989180 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-catalog-content\") pod \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " Jan 30 07:47:16 crc kubenswrapper[4520]: I0130 07:47:16.989429 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwh6n\" (UniqueName: \"kubernetes.io/projected/ec1af5d1-9661-467d-aa18-7dfa0e07e458-kube-api-access-zwh6n\") pod \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " Jan 30 07:47:16 crc kubenswrapper[4520]: I0130 07:47:16.989556 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-utilities\") pod \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\" (UID: \"ec1af5d1-9661-467d-aa18-7dfa0e07e458\") " Jan 30 07:47:16 crc kubenswrapper[4520]: I0130 07:47:16.990254 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-utilities" (OuterVolumeSpecName: "utilities") pod "ec1af5d1-9661-467d-aa18-7dfa0e07e458" (UID: "ec1af5d1-9661-467d-aa18-7dfa0e07e458"). InnerVolumeSpecName "utilities". 
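
The pod_startup_latency_tracker entry above for certified-operators-b448d reports two durations: podStartE2EDuration (pod creation to observed running) and podStartSLOduration, which excludes image-pull time. That relationship can be checked directly from the logged timestamps; the sketch below copies them verbatim and does the subtraction, using only the standard library. The interpretation of the arithmetic is ours, not a quote from kubelet source.

package main

import (
	"fmt"
	"time"
)

// Layout matching the kubelet's printed timestamps (time.Time default format).
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the certified-operators-b448d entry above.
	created := mustParse("2026-01-30 07:47:03 +0000 UTC")
	firstPull := mustParse("2026-01-30 07:47:05.511304174 +0000 UTC")
	lastPull := mustParse("2026-01-30 07:47:08.047869859 +0000 UTC")
	observed := mustParse("2026-01-30 07:47:08.575274139 +0000 UTC")

	e2e := observed.Sub(created)     // 5.575274139s, matching podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 2.536565685s spent pulling images

	// Prints 3.038708454s; the log's 3.038708453s agrees to within a nanosecond
	// (the kubelet subtracts using its own monotonic bookkeeping).
	fmt.Println("SLO duration (e2e - pulling):", e2e-pulling)
}

The collect-profiles entry earlier shows the degenerate case: firstStartedPulling and lastFinishedPulling are the zero time 0001-01-01, i.e. no pull happened, so the SLO and E2E durations are identical.
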
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:47:16 crc kubenswrapper[4520]: I0130 07:47:16.990339 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.001782 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1af5d1-9661-467d-aa18-7dfa0e07e458-kube-api-access-zwh6n" (OuterVolumeSpecName: "kube-api-access-zwh6n") pod "ec1af5d1-9661-467d-aa18-7dfa0e07e458" (UID: "ec1af5d1-9661-467d-aa18-7dfa0e07e458"). InnerVolumeSpecName "kube-api-access-zwh6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.040917 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec1af5d1-9661-467d-aa18-7dfa0e07e458" (UID: "ec1af5d1-9661-467d-aa18-7dfa0e07e458"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.092923 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwh6n\" (UniqueName: \"kubernetes.io/projected/ec1af5d1-9661-467d-aa18-7dfa0e07e458-kube-api-access-zwh6n\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.092966 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec1af5d1-9661-467d-aa18-7dfa0e07e458-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.617158 4520 generic.go:334] "Generic (PLEG): container finished" podID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerID="59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3" exitCode=0 Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.617802 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b448d" event={"ID":"ec1af5d1-9661-467d-aa18-7dfa0e07e458","Type":"ContainerDied","Data":"59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3"} Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.617894 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b448d" event={"ID":"ec1af5d1-9661-467d-aa18-7dfa0e07e458","Type":"ContainerDied","Data":"a011aa5f813413523e243bf4d57b2c9621f0674dac01772b35b716d646251ec0"} Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.617970 4520 scope.go:117] "RemoveContainer" containerID="59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.618153 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b448d" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.640322 4520 scope.go:117] "RemoveContainer" containerID="3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.650727 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b448d"] Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.658495 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b448d"] Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.663066 4520 scope.go:117] "RemoveContainer" containerID="0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.695437 4520 scope.go:117] "RemoveContainer" containerID="59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3" Jan 30 07:47:17 crc kubenswrapper[4520]: E0130 07:47:17.696306 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3\": container with ID starting with 59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3 not found: ID does not exist" containerID="59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.696414 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3"} err="failed to get container status \"59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3\": rpc error: code = NotFound desc = could not find container \"59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3\": container with ID starting with 59dd5873cf564d5a773a7ad33fe5c0abb6ed2e95d901eb7ec506c32b6b08dde3 not found: ID does not exist" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.696494 4520 scope.go:117] "RemoveContainer" containerID="3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88" Jan 30 07:47:17 crc kubenswrapper[4520]: E0130 07:47:17.697011 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88\": container with ID starting with 3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88 not found: ID does not exist" containerID="3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.697052 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88"} err="failed to get container status \"3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88\": rpc error: code = NotFound desc = could not find container \"3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88\": container with ID starting with 3cfb395fed8596e88c7ee96b8c76a325fdc5d926ea9eb625918a250e21670a88 not found: ID does not exist" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.697077 4520 scope.go:117] "RemoveContainer" containerID="0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f" Jan 30 07:47:17 crc kubenswrapper[4520]: E0130 07:47:17.697420 4520 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f\": container with ID starting with 0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f not found: ID does not exist" containerID="0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f" Jan 30 07:47:17 crc kubenswrapper[4520]: I0130 07:47:17.697449 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f"} err="failed to get container status \"0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f\": rpc error: code = NotFound desc = could not find container \"0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f\": container with ID starting with 0174c173f6b0698c14293b6f32a75b490ab477da13c92ae93fc3be2ef319092f not found: ID does not exist" Jan 30 07:47:18 crc kubenswrapper[4520]: I0130 07:47:18.693838 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" path="/var/lib/kubelet/pods/ec1af5d1-9661-467d-aa18-7dfa0e07e458/volumes" Jan 30 07:47:19 crc kubenswrapper[4520]: I0130 07:47:19.959380 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pv2wq"] Jan 30 07:47:19 crc kubenswrapper[4520]: E0130 07:47:19.959919 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerName="extract-content" Jan 30 07:47:19 crc kubenswrapper[4520]: I0130 07:47:19.959931 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerName="extract-content" Jan 30 07:47:19 crc kubenswrapper[4520]: E0130 07:47:19.959945 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerName="registry-server" Jan 30 07:47:19 crc kubenswrapper[4520]: I0130 07:47:19.959951 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerName="registry-server" Jan 30 07:47:19 crc kubenswrapper[4520]: E0130 07:47:19.959961 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerName="extract-utilities" Jan 30 07:47:19 crc kubenswrapper[4520]: I0130 07:47:19.959967 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerName="extract-utilities" Jan 30 07:47:19 crc kubenswrapper[4520]: I0130 07:47:19.960133 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1af5d1-9661-467d-aa18-7dfa0e07e458" containerName="registry-server" Jan 30 07:47:19 crc kubenswrapper[4520]: I0130 07:47:19.964045 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:19 crc kubenswrapper[4520]: I0130 07:47:19.976684 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pv2wq"] Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.041080 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-catalog-content\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.041144 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnq55\" (UniqueName: \"kubernetes.io/projected/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-kube-api-access-gnq55\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.041177 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-utilities\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.142273 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnq55\" (UniqueName: \"kubernetes.io/projected/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-kube-api-access-gnq55\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.142322 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-utilities\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.142432 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-catalog-content\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.142806 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-utilities\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.142843 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-catalog-content\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.157436 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gnq55\" (UniqueName: \"kubernetes.io/projected/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-kube-api-access-gnq55\") pod \"community-operators-pv2wq\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.277752 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:20 crc kubenswrapper[4520]: I0130 07:47:20.675206 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pv2wq"] Jan 30 07:47:21 crc kubenswrapper[4520]: I0130 07:47:21.648473 4520 generic.go:334] "Generic (PLEG): container finished" podID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerID="30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8" exitCode=0 Jan 30 07:47:21 crc kubenswrapper[4520]: I0130 07:47:21.648579 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2wq" event={"ID":"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03","Type":"ContainerDied","Data":"30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8"} Jan 30 07:47:21 crc kubenswrapper[4520]: I0130 07:47:21.648791 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2wq" event={"ID":"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03","Type":"ContainerStarted","Data":"69fb897e779fdef7a74002422fcca60f199334d36e0d20c2cf4a2b7e2d32827a"} Jan 30 07:47:22 crc kubenswrapper[4520]: I0130 07:47:22.657831 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2wq" event={"ID":"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03","Type":"ContainerStarted","Data":"f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c"} Jan 30 07:47:23 crc kubenswrapper[4520]: I0130 07:47:23.684616 4520 generic.go:334] "Generic (PLEG): container finished" podID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerID="f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c" exitCode=0 Jan 30 07:47:23 crc kubenswrapper[4520]: I0130 07:47:23.684720 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2wq" event={"ID":"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03","Type":"ContainerDied","Data":"f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c"} Jan 30 07:47:24 crc kubenswrapper[4520]: I0130 07:47:24.698143 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2wq" event={"ID":"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03","Type":"ContainerStarted","Data":"1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e"} Jan 30 07:47:24 crc kubenswrapper[4520]: I0130 07:47:24.713292 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pv2wq" podStartSLOduration=3.172220575 podStartE2EDuration="5.713273158s" podCreationTimestamp="2026-01-30 07:47:19 +0000 UTC" firstStartedPulling="2026-01-30 07:47:21.651095392 +0000 UTC m=+3755.279447573" lastFinishedPulling="2026-01-30 07:47:24.192147974 +0000 UTC m=+3757.820500156" observedRunningTime="2026-01-30 07:47:24.712186365 +0000 UTC m=+3758.340538546" watchObservedRunningTime="2026-01-30 07:47:24.713273158 +0000 UTC m=+3758.341625339" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.320415 4520 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-znw8s"] Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.323236 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.329248 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-catalog-content\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.329393 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-utilities\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.329525 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xz49\" (UniqueName: \"kubernetes.io/projected/5bbc5384-fc43-420e-aa4f-4067bae237d9-kube-api-access-8xz49\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.331201 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw8s"] Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.432026 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-catalog-content\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.432109 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-utilities\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.432168 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xz49\" (UniqueName: \"kubernetes.io/projected/5bbc5384-fc43-420e-aa4f-4067bae237d9-kube-api-access-8xz49\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.432565 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-catalog-content\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.432565 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-utilities\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") 
" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.477345 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xz49\" (UniqueName: \"kubernetes.io/projected/5bbc5384-fc43-420e-aa4f-4067bae237d9-kube-api-access-8xz49\") pod \"redhat-marketplace-znw8s\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:29 crc kubenswrapper[4520]: I0130 07:47:29.638495 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:30 crc kubenswrapper[4520]: I0130 07:47:30.056736 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw8s"] Jan 30 07:47:30 crc kubenswrapper[4520]: W0130 07:47:30.068213 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bbc5384_fc43_420e_aa4f_4067bae237d9.slice/crio-d5782300ca5233657101f0c0952bbb6d7427df3a5b5b33cbc140d69b103d12a1 WatchSource:0}: Error finding container d5782300ca5233657101f0c0952bbb6d7427df3a5b5b33cbc140d69b103d12a1: Status 404 returned error can't find the container with id d5782300ca5233657101f0c0952bbb6d7427df3a5b5b33cbc140d69b103d12a1 Jan 30 07:47:30 crc kubenswrapper[4520]: I0130 07:47:30.278377 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:30 crc kubenswrapper[4520]: I0130 07:47:30.278437 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:30 crc kubenswrapper[4520]: I0130 07:47:30.321972 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:30 crc kubenswrapper[4520]: I0130 07:47:30.737330 4520 generic.go:334] "Generic (PLEG): container finished" podID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerID="39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4" exitCode=0 Jan 30 07:47:30 crc kubenswrapper[4520]: I0130 07:47:30.737419 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw8s" event={"ID":"5bbc5384-fc43-420e-aa4f-4067bae237d9","Type":"ContainerDied","Data":"39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4"} Jan 30 07:47:30 crc kubenswrapper[4520]: I0130 07:47:30.737731 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw8s" event={"ID":"5bbc5384-fc43-420e-aa4f-4067bae237d9","Type":"ContainerStarted","Data":"d5782300ca5233657101f0c0952bbb6d7427df3a5b5b33cbc140d69b103d12a1"} Jan 30 07:47:30 crc kubenswrapper[4520]: I0130 07:47:30.777402 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:31 crc kubenswrapper[4520]: I0130 07:47:31.758399 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw8s" event={"ID":"5bbc5384-fc43-420e-aa4f-4067bae237d9","Type":"ContainerStarted","Data":"8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a"} Jan 30 07:47:32 crc kubenswrapper[4520]: I0130 07:47:32.724783 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pv2wq"] Jan 30 07:47:32 crc kubenswrapper[4520]: 
I0130 07:47:32.767758 4520 generic.go:334] "Generic (PLEG): container finished" podID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerID="8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a" exitCode=0 Jan 30 07:47:32 crc kubenswrapper[4520]: I0130 07:47:32.767849 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw8s" event={"ID":"5bbc5384-fc43-420e-aa4f-4067bae237d9","Type":"ContainerDied","Data":"8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a"} Jan 30 07:47:32 crc kubenswrapper[4520]: I0130 07:47:32.768037 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pv2wq" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerName="registry-server" containerID="cri-o://1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e" gracePeriod=2 Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.215341 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.216032 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-utilities\") pod \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.216780 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-utilities" (OuterVolumeSpecName: "utilities") pod "e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" (UID: "e1a47b13-c4f0-4b55-910f-c47f0fa1ba03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.318009 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-catalog-content\") pod \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.318102 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnq55\" (UniqueName: \"kubernetes.io/projected/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-kube-api-access-gnq55\") pod \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\" (UID: \"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03\") " Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.318659 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.327641 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-kube-api-access-gnq55" (OuterVolumeSpecName: "kube-api-access-gnq55") pod "e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" (UID: "e1a47b13-c4f0-4b55-910f-c47f0fa1ba03"). InnerVolumeSpecName "kube-api-access-gnq55". 
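
The "SyncLoop (probe)" transitions in this section follow the same shape for every catalog pod: startup reports unhealthy, then started, and only then does the readiness worker move from an empty status to ready. A toy model of that gating follows; it mirrors the observable ordering in these log lines, not the kubelet's actual prober implementation.

package main

import "fmt"

// podProber is a toy: startup gates readiness, as the probe log lines suggest.
type podProber struct {
	started bool // set once the startup probe succeeds
	ready   bool
}

func (p *podProber) onStartupProbe(ok bool) string {
	if ok {
		p.started = true
		return "started"
	}
	return "unhealthy"
}

func (p *podProber) onReadinessProbe(ok bool) string {
	if !p.started {
		return "" // readiness held back while startup is pending, hence status=""
	}
	p.ready = ok
	if ok {
		return "ready"
	}
	return "not ready"
}

func main() {
	p := &podProber{}
	fmt.Println("startup:  ", p.onStartupProbe(false)) // unhealthy
	fmt.Println("readiness:", p.onReadinessProbe(true)) // "" (gated)
	fmt.Println("startup:  ", p.onStartupProbe(true))  // started
	fmt.Println("readiness:", p.onReadinessProbe(true)) // ready
}
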
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.373577 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" (UID: "e1a47b13-c4f0-4b55-910f-c47f0fa1ba03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.420773 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.420910 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnq55\" (UniqueName: \"kubernetes.io/projected/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03-kube-api-access-gnq55\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.776653 4520 generic.go:334] "Generic (PLEG): container finished" podID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerID="1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e" exitCode=0 Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.776729 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pv2wq" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.776756 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2wq" event={"ID":"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03","Type":"ContainerDied","Data":"1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e"} Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.777112 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pv2wq" event={"ID":"e1a47b13-c4f0-4b55-910f-c47f0fa1ba03","Type":"ContainerDied","Data":"69fb897e779fdef7a74002422fcca60f199334d36e0d20c2cf4a2b7e2d32827a"} Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.777135 4520 scope.go:117] "RemoveContainer" containerID="1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.780502 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw8s" event={"ID":"5bbc5384-fc43-420e-aa4f-4067bae237d9","Type":"ContainerStarted","Data":"95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38"} Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.800300 4520 scope.go:117] "RemoveContainer" containerID="f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.810583 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znw8s" podStartSLOduration=2.301569116 podStartE2EDuration="4.810566116s" podCreationTimestamp="2026-01-30 07:47:29 +0000 UTC" firstStartedPulling="2026-01-30 07:47:30.739167493 +0000 UTC m=+3764.367519674" lastFinishedPulling="2026-01-30 07:47:33.248164493 +0000 UTC m=+3766.876516674" observedRunningTime="2026-01-30 07:47:33.800730352 +0000 UTC m=+3767.429082533" watchObservedRunningTime="2026-01-30 07:47:33.810566116 +0000 UTC m=+3767.438918296" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.841247 4520 scope.go:117] 
"RemoveContainer" containerID="30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.845983 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pv2wq"] Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.856064 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pv2wq"] Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.860375 4520 scope.go:117] "RemoveContainer" containerID="1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e" Jan 30 07:47:33 crc kubenswrapper[4520]: E0130 07:47:33.860808 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e\": container with ID starting with 1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e not found: ID does not exist" containerID="1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.860840 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e"} err="failed to get container status \"1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e\": rpc error: code = NotFound desc = could not find container \"1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e\": container with ID starting with 1f7e7b8040fd2a0a78d817e526d36102589584b6a14475a98fa29f0d31ae290e not found: ID does not exist" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.860863 4520 scope.go:117] "RemoveContainer" containerID="f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c" Jan 30 07:47:33 crc kubenswrapper[4520]: E0130 07:47:33.861248 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c\": container with ID starting with f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c not found: ID does not exist" containerID="f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.861285 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c"} err="failed to get container status \"f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c\": rpc error: code = NotFound desc = could not find container \"f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c\": container with ID starting with f76e73361f3de5496354d438a2fcba380e98190cecd3520dc34515d881152e2c not found: ID does not exist" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.861311 4520 scope.go:117] "RemoveContainer" containerID="30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8" Jan 30 07:47:33 crc kubenswrapper[4520]: E0130 07:47:33.861683 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8\": container with ID starting with 30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8 not found: ID does not exist" 
containerID="30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8" Jan 30 07:47:33 crc kubenswrapper[4520]: I0130 07:47:33.861719 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8"} err="failed to get container status \"30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8\": rpc error: code = NotFound desc = could not find container \"30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8\": container with ID starting with 30e2502e1e7de0ae313b4f50fd874843f0e4b3659bfd647bf46d9dba1af696e8 not found: ID does not exist" Jan 30 07:47:34 crc kubenswrapper[4520]: I0130 07:47:34.695699 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" path="/var/lib/kubelet/pods/e1a47b13-c4f0-4b55-910f-c47f0fa1ba03/volumes" Jan 30 07:47:39 crc kubenswrapper[4520]: I0130 07:47:39.638987 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:39 crc kubenswrapper[4520]: I0130 07:47:39.639584 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:39 crc kubenswrapper[4520]: I0130 07:47:39.672506 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:39 crc kubenswrapper[4520]: I0130 07:47:39.869614 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:39 crc kubenswrapper[4520]: I0130 07:47:39.912717 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw8s"] Jan 30 07:47:41 crc kubenswrapper[4520]: I0130 07:47:41.849381 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-znw8s" podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerName="registry-server" containerID="cri-o://95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38" gracePeriod=2 Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.254097 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.403017 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-catalog-content\") pod \"5bbc5384-fc43-420e-aa4f-4067bae237d9\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.403132 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xz49\" (UniqueName: \"kubernetes.io/projected/5bbc5384-fc43-420e-aa4f-4067bae237d9-kube-api-access-8xz49\") pod \"5bbc5384-fc43-420e-aa4f-4067bae237d9\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.403187 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-utilities\") pod \"5bbc5384-fc43-420e-aa4f-4067bae237d9\" (UID: \"5bbc5384-fc43-420e-aa4f-4067bae237d9\") " Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.404337 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-utilities" (OuterVolumeSpecName: "utilities") pod "5bbc5384-fc43-420e-aa4f-4067bae237d9" (UID: "5bbc5384-fc43-420e-aa4f-4067bae237d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.408137 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bbc5384-fc43-420e-aa4f-4067bae237d9-kube-api-access-8xz49" (OuterVolumeSpecName: "kube-api-access-8xz49") pod "5bbc5384-fc43-420e-aa4f-4067bae237d9" (UID: "5bbc5384-fc43-420e-aa4f-4067bae237d9"). InnerVolumeSpecName "kube-api-access-8xz49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.424377 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bbc5384-fc43-420e-aa4f-4067bae237d9" (UID: "5bbc5384-fc43-420e-aa4f-4067bae237d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.506191 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.506239 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xz49\" (UniqueName: \"kubernetes.io/projected/5bbc5384-fc43-420e-aa4f-4067bae237d9-kube-api-access-8xz49\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.506255 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbc5384-fc43-420e-aa4f-4067bae237d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.861938 4520 generic.go:334] "Generic (PLEG): container finished" podID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerID="95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38" exitCode=0 Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.862018 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znw8s" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.862043 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw8s" event={"ID":"5bbc5384-fc43-420e-aa4f-4067bae237d9","Type":"ContainerDied","Data":"95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38"} Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.862159 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw8s" event={"ID":"5bbc5384-fc43-420e-aa4f-4067bae237d9","Type":"ContainerDied","Data":"d5782300ca5233657101f0c0952bbb6d7427df3a5b5b33cbc140d69b103d12a1"} Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.862181 4520 scope.go:117] "RemoveContainer" containerID="95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.889411 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw8s"] Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.895325 4520 scope.go:117] "RemoveContainer" containerID="8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.896577 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw8s"] Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.919693 4520 scope.go:117] "RemoveContainer" containerID="39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.948728 4520 scope.go:117] "RemoveContainer" containerID="95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38" Jan 30 07:47:42 crc kubenswrapper[4520]: E0130 07:47:42.949218 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38\": container with ID starting with 95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38 not found: ID does not exist" containerID="95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.949251 4520 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38"} err="failed to get container status \"95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38\": rpc error: code = NotFound desc = could not find container \"95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38\": container with ID starting with 95d40a5b277e88890743a54e0abe89410f7deb77d85c0e83293db5f80b72db38 not found: ID does not exist" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.949272 4520 scope.go:117] "RemoveContainer" containerID="8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a" Jan 30 07:47:42 crc kubenswrapper[4520]: E0130 07:47:42.949663 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a\": container with ID starting with 8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a not found: ID does not exist" containerID="8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.949700 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a"} err="failed to get container status \"8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a\": rpc error: code = NotFound desc = could not find container \"8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a\": container with ID starting with 8196c9fe084228b93a2ba23771812409aef78da12396848fd45ecbb8c03a311a not found: ID does not exist" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.949732 4520 scope.go:117] "RemoveContainer" containerID="39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4" Jan 30 07:47:42 crc kubenswrapper[4520]: E0130 07:47:42.950245 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4\": container with ID starting with 39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4 not found: ID does not exist" containerID="39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4" Jan 30 07:47:42 crc kubenswrapper[4520]: I0130 07:47:42.950273 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4"} err="failed to get container status \"39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4\": rpc error: code = NotFound desc = could not find container \"39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4\": container with ID starting with 39a5da9b0bcb52a4312d68d7ab915c61b25ced412724aa0246ff78ef449ea5b4 not found: ID does not exist" Jan 30 07:47:44 crc kubenswrapper[4520]: I0130 07:47:44.696662 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" path="/var/lib/kubelet/pods/5bbc5384-fc43-420e-aa4f-4067bae237d9/volumes" Jan 30 07:47:56 crc kubenswrapper[4520]: E0130 07:47:56.698373 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:56738->192.168.25.87:39417: write tcp 192.168.25.87:56738->192.168.25.87:39417: write: broken pipe Jan 30 07:47:57 crc kubenswrapper[4520]: I0130 
07:47:57.793854 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:47:57 crc kubenswrapper[4520]: I0130 07:47:57.794446 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:48:27 crc kubenswrapper[4520]: I0130 07:48:27.793329 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:48:27 crc kubenswrapper[4520]: I0130 07:48:27.794006 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:48:57 crc kubenswrapper[4520]: I0130 07:48:57.794230 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:48:57 crc kubenswrapper[4520]: I0130 07:48:57.795985 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:48:57 crc kubenswrapper[4520]: I0130 07:48:57.796043 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:48:57 crc kubenswrapper[4520]: I0130 07:48:57.796682 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:48:57 crc kubenswrapper[4520]: I0130 07:48:57.796744 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" gracePeriod=600 Jan 30 07:48:57 crc kubenswrapper[4520]: E0130 07:48:57.933141 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:48:58 crc kubenswrapper[4520]: I0130 07:48:58.477682 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" exitCode=0 Jan 30 07:48:58 crc kubenswrapper[4520]: I0130 07:48:58.477765 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d"} Jan 30 07:48:58 crc kubenswrapper[4520]: I0130 07:48:58.478109 4520 scope.go:117] "RemoveContainer" containerID="456b8a5b16b91532f2415fc1c7a5797c992b95763d1a3bcd0cba514d2afb94ed" Jan 30 07:48:58 crc kubenswrapper[4520]: I0130 07:48:58.481677 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:48:58 crc kubenswrapper[4520]: E0130 07:48:58.482168 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.733430 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x2r2j"] Jan 30 07:49:06 crc kubenswrapper[4520]: E0130 07:49:06.734743 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerName="extract-utilities" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.734761 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerName="extract-utilities" Jan 30 07:49:06 crc kubenswrapper[4520]: E0130 07:49:06.734796 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerName="extract-utilities" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.734802 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerName="extract-utilities" Jan 30 07:49:06 crc kubenswrapper[4520]: E0130 07:49:06.734834 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerName="registry-server" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.734840 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerName="registry-server" Jan 30 07:49:06 crc kubenswrapper[4520]: E0130 07:49:06.734863 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerName="registry-server" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.734869 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerName="registry-server" Jan 30 07:49:06 crc kubenswrapper[4520]: E0130 07:49:06.734888 4520 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerName="extract-content" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.734894 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerName="extract-content" Jan 30 07:49:06 crc kubenswrapper[4520]: E0130 07:49:06.734912 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerName="extract-content" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.734920 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerName="extract-content" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.735253 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a47b13-c4f0-4b55-910f-c47f0fa1ba03" containerName="registry-server" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.735264 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bbc5384-fc43-420e-aa4f-4067bae237d9" containerName="registry-server" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.741413 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.765065 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2r2j"] Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.817230 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dd9\" (UniqueName: \"kubernetes.io/projected/345a7df8-47db-4f73-8daf-b45bb7726760-kube-api-access-m8dd9\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.817308 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-catalog-content\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.817446 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-utilities\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.919625 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dd9\" (UniqueName: \"kubernetes.io/projected/345a7df8-47db-4f73-8daf-b45bb7726760-kube-api-access-m8dd9\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.919695 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-catalog-content\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.919826 4520 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-utilities\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.920270 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-utilities\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.920783 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-catalog-content\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:06 crc kubenswrapper[4520]: I0130 07:49:06.944227 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dd9\" (UniqueName: \"kubernetes.io/projected/345a7df8-47db-4f73-8daf-b45bb7726760-kube-api-access-m8dd9\") pod \"redhat-operators-x2r2j\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:07 crc kubenswrapper[4520]: I0130 07:49:07.061266 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:07 crc kubenswrapper[4520]: I0130 07:49:07.502728 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2r2j"] Jan 30 07:49:07 crc kubenswrapper[4520]: I0130 07:49:07.552124 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2r2j" event={"ID":"345a7df8-47db-4f73-8daf-b45bb7726760","Type":"ContainerStarted","Data":"f29c3d58aa073da6c108868e52926c3a6376e2f9bcf06c0ca600680027491d72"} Jan 30 07:49:08 crc kubenswrapper[4520]: I0130 07:49:08.561784 4520 generic.go:334] "Generic (PLEG): container finished" podID="345a7df8-47db-4f73-8daf-b45bb7726760" containerID="047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f" exitCode=0 Jan 30 07:49:08 crc kubenswrapper[4520]: I0130 07:49:08.561917 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2r2j" event={"ID":"345a7df8-47db-4f73-8daf-b45bb7726760","Type":"ContainerDied","Data":"047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f"} Jan 30 07:49:09 crc kubenswrapper[4520]: I0130 07:49:09.571091 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2r2j" event={"ID":"345a7df8-47db-4f73-8daf-b45bb7726760","Type":"ContainerStarted","Data":"9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41"} Jan 30 07:49:12 crc kubenswrapper[4520]: I0130 07:49:12.593959 4520 generic.go:334] "Generic (PLEG): container finished" podID="345a7df8-47db-4f73-8daf-b45bb7726760" containerID="9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41" exitCode=0 Jan 30 07:49:12 crc kubenswrapper[4520]: I0130 07:49:12.594035 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2r2j" 
event={"ID":"345a7df8-47db-4f73-8daf-b45bb7726760","Type":"ContainerDied","Data":"9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41"} Jan 30 07:49:12 crc kubenswrapper[4520]: I0130 07:49:12.685612 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:49:12 crc kubenswrapper[4520]: E0130 07:49:12.686148 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:49:13 crc kubenswrapper[4520]: I0130 07:49:13.607401 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2r2j" event={"ID":"345a7df8-47db-4f73-8daf-b45bb7726760","Type":"ContainerStarted","Data":"9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1"} Jan 30 07:49:13 crc kubenswrapper[4520]: I0130 07:49:13.637469 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x2r2j" podStartSLOduration=3.122983527 podStartE2EDuration="7.637453737s" podCreationTimestamp="2026-01-30 07:49:06 +0000 UTC" firstStartedPulling="2026-01-30 07:49:08.563710437 +0000 UTC m=+3862.192062617" lastFinishedPulling="2026-01-30 07:49:13.078180646 +0000 UTC m=+3866.706532827" observedRunningTime="2026-01-30 07:49:13.625544585 +0000 UTC m=+3867.253896766" watchObservedRunningTime="2026-01-30 07:49:13.637453737 +0000 UTC m=+3867.265805918" Jan 30 07:49:17 crc kubenswrapper[4520]: I0130 07:49:17.061820 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:17 crc kubenswrapper[4520]: I0130 07:49:17.062379 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:18 crc kubenswrapper[4520]: I0130 07:49:18.097115 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x2r2j" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="registry-server" probeResult="failure" output=< Jan 30 07:49:18 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 07:49:18 crc kubenswrapper[4520]: > Jan 30 07:49:27 crc kubenswrapper[4520]: I0130 07:49:27.099066 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:27 crc kubenswrapper[4520]: I0130 07:49:27.136954 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:27 crc kubenswrapper[4520]: I0130 07:49:27.334268 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2r2j"] Jan 30 07:49:27 crc kubenswrapper[4520]: I0130 07:49:27.685697 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:49:27 crc kubenswrapper[4520]: E0130 07:49:27.686196 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:49:28 crc kubenswrapper[4520]: I0130 07:49:28.727478 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x2r2j" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="registry-server" containerID="cri-o://9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1" gracePeriod=2 Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.245611 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.263528 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-catalog-content\") pod \"345a7df8-47db-4f73-8daf-b45bb7726760\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.263601 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-utilities\") pod \"345a7df8-47db-4f73-8daf-b45bb7726760\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.263721 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8dd9\" (UniqueName: \"kubernetes.io/projected/345a7df8-47db-4f73-8daf-b45bb7726760-kube-api-access-m8dd9\") pod \"345a7df8-47db-4f73-8daf-b45bb7726760\" (UID: \"345a7df8-47db-4f73-8daf-b45bb7726760\") " Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.264766 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-utilities" (OuterVolumeSpecName: "utilities") pod "345a7df8-47db-4f73-8daf-b45bb7726760" (UID: "345a7df8-47db-4f73-8daf-b45bb7726760"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.272063 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345a7df8-47db-4f73-8daf-b45bb7726760-kube-api-access-m8dd9" (OuterVolumeSpecName: "kube-api-access-m8dd9") pod "345a7df8-47db-4f73-8daf-b45bb7726760" (UID: "345a7df8-47db-4f73-8daf-b45bb7726760"). InnerVolumeSpecName "kube-api-access-m8dd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.367250 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.367349 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8dd9\" (UniqueName: \"kubernetes.io/projected/345a7df8-47db-4f73-8daf-b45bb7726760-kube-api-access-m8dd9\") on node \"crc\" DevicePath \"\"" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.380776 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "345a7df8-47db-4f73-8daf-b45bb7726760" (UID: "345a7df8-47db-4f73-8daf-b45bb7726760"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.468891 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345a7df8-47db-4f73-8daf-b45bb7726760-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.737131 4520 generic.go:334] "Generic (PLEG): container finished" podID="345a7df8-47db-4f73-8daf-b45bb7726760" containerID="9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1" exitCode=0 Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.737209 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2r2j" event={"ID":"345a7df8-47db-4f73-8daf-b45bb7726760","Type":"ContainerDied","Data":"9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1"} Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.737222 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x2r2j" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.738046 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2r2j" event={"ID":"345a7df8-47db-4f73-8daf-b45bb7726760","Type":"ContainerDied","Data":"f29c3d58aa073da6c108868e52926c3a6376e2f9bcf06c0ca600680027491d72"} Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.738084 4520 scope.go:117] "RemoveContainer" containerID="9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.767693 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2r2j"] Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.768345 4520 scope.go:117] "RemoveContainer" containerID="9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.779216 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x2r2j"] Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.786397 4520 scope.go:117] "RemoveContainer" containerID="047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.823321 4520 scope.go:117] "RemoveContainer" containerID="9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1" Jan 30 07:49:29 crc kubenswrapper[4520]: E0130 07:49:29.823728 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1\": container with ID starting with 9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1 not found: ID does not exist" containerID="9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.823765 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1"} err="failed to get container status \"9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1\": rpc error: code = NotFound desc = could not find container \"9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1\": container with ID starting with 9eadf3522940d115a4eec94a63c9b6b9646b7e683341cac7b5962a897fe73be1 not found: ID does not exist" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.823789 4520 scope.go:117] "RemoveContainer" containerID="9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41" Jan 30 07:49:29 crc kubenswrapper[4520]: E0130 07:49:29.824020 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41\": container with ID starting with 9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41 not found: ID does not exist" containerID="9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.824040 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41"} err="failed to get container status \"9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41\": rpc error: code = NotFound desc = could not find container 
\"9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41\": container with ID starting with 9c33708f191719f8263bf137b507ac3aaf50c01789649d1b36491825565ecb41 not found: ID does not exist" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.824052 4520 scope.go:117] "RemoveContainer" containerID="047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f" Jan 30 07:49:29 crc kubenswrapper[4520]: E0130 07:49:29.824250 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f\": container with ID starting with 047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f not found: ID does not exist" containerID="047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f" Jan 30 07:49:29 crc kubenswrapper[4520]: I0130 07:49:29.824275 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f"} err="failed to get container status \"047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f\": rpc error: code = NotFound desc = could not find container \"047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f\": container with ID starting with 047be271fdef353fecb2af83056c8fd78f9452c5aa5a1d5f1e1a8b0a73c7931f not found: ID does not exist" Jan 30 07:49:30 crc kubenswrapper[4520]: I0130 07:49:30.694609 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" path="/var/lib/kubelet/pods/345a7df8-47db-4f73-8daf-b45bb7726760/volumes" Jan 30 07:49:38 crc kubenswrapper[4520]: I0130 07:49:38.687027 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:49:38 crc kubenswrapper[4520]: E0130 07:49:38.687928 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:49:53 crc kubenswrapper[4520]: I0130 07:49:53.686549 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:49:53 crc kubenswrapper[4520]: E0130 07:49:53.687369 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:50:06 crc kubenswrapper[4520]: I0130 07:50:06.691662 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:50:06 crc kubenswrapper[4520]: E0130 07:50:06.692538 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:50:18 crc kubenswrapper[4520]: I0130 07:50:18.686201 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:50:18 crc kubenswrapper[4520]: E0130 07:50:18.686814 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:50:33 crc kubenswrapper[4520]: I0130 07:50:33.685510 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:50:33 crc kubenswrapper[4520]: E0130 07:50:33.686404 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:50:44 crc kubenswrapper[4520]: I0130 07:50:44.687631 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:50:44 crc kubenswrapper[4520]: E0130 07:50:44.688425 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:50:55 crc kubenswrapper[4520]: I0130 07:50:55.685633 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:50:55 crc kubenswrapper[4520]: E0130 07:50:55.686417 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:51:06 crc kubenswrapper[4520]: I0130 07:51:06.696883 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:51:06 crc kubenswrapper[4520]: E0130 07:51:06.697535 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:51:20 crc kubenswrapper[4520]: I0130 07:51:20.685868 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:51:20 crc kubenswrapper[4520]: E0130 07:51:20.686383 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:51:31 crc kubenswrapper[4520]: I0130 07:51:31.685098 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:51:31 crc kubenswrapper[4520]: E0130 07:51:31.685762 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:51:44 crc kubenswrapper[4520]: I0130 07:51:44.685870 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:51:44 crc kubenswrapper[4520]: E0130 07:51:44.686808 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:51:56 crc kubenswrapper[4520]: I0130 07:51:56.691675 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:51:56 crc kubenswrapper[4520]: E0130 07:51:56.693027 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:52:10 crc kubenswrapper[4520]: I0130 07:52:10.686637 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:52:10 crc kubenswrapper[4520]: E0130 07:52:10.687981 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:52:24 crc kubenswrapper[4520]: I0130 07:52:24.685461 4520 scope.go:117] "RemoveContainer" 
containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:52:24 crc kubenswrapper[4520]: E0130 07:52:24.686335 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:52:35 crc kubenswrapper[4520]: I0130 07:52:35.686494 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:52:35 crc kubenswrapper[4520]: E0130 07:52:35.688992 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:52:46 crc kubenswrapper[4520]: I0130 07:52:46.690534 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:52:46 crc kubenswrapper[4520]: E0130 07:52:46.691253 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:53:01 crc kubenswrapper[4520]: I0130 07:53:01.686199 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:53:01 crc kubenswrapper[4520]: E0130 07:53:01.688358 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:53:12 crc kubenswrapper[4520]: I0130 07:53:12.686070 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:53:12 crc kubenswrapper[4520]: E0130 07:53:12.686778 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:53:24 crc kubenswrapper[4520]: I0130 07:53:24.686848 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:53:24 crc kubenswrapper[4520]: E0130 07:53:24.688169 4520 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:53:38 crc kubenswrapper[4520]: I0130 07:53:38.685213 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:53:38 crc kubenswrapper[4520]: E0130 07:53:38.686049 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:53:49 crc kubenswrapper[4520]: I0130 07:53:49.685652 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:53:49 crc kubenswrapper[4520]: E0130 07:53:49.686262 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 07:54:03 crc kubenswrapper[4520]: I0130 07:54:03.685735 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:54:03 crc kubenswrapper[4520]: I0130 07:54:03.885552 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"77cd5b87e2a7d3d7353f16141d94f2ae98b277110007125a51abfb7eb1a1a076"} Jan 30 07:54:52 crc kubenswrapper[4520]: E0130 07:54:52.715801 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:43574->192.168.25.87:39417: write tcp 192.168.25.87:43574->192.168.25.87:39417: write: broken pipe Jan 30 07:55:45 crc kubenswrapper[4520]: E0130 07:55:45.137723 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:33614->192.168.25.87:39417: write tcp 192.168.25.87:33614->192.168.25.87:39417: write: broken pipe Jan 30 07:56:27 crc kubenswrapper[4520]: I0130 07:56:27.793225 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:56:27 crc kubenswrapper[4520]: I0130 07:56:27.793612 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:56:57 crc kubenswrapper[4520]: 
I0130 07:56:57.793877 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:56:57 crc kubenswrapper[4520]: I0130 07:56:57.794268 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.300368 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bzmdj"] Jan 30 07:57:08 crc kubenswrapper[4520]: E0130 07:57:08.310734 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="extract-utilities" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.310839 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="extract-utilities" Jan 30 07:57:08 crc kubenswrapper[4520]: E0130 07:57:08.310912 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="registry-server" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.310969 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="registry-server" Jan 30 07:57:08 crc kubenswrapper[4520]: E0130 07:57:08.311029 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="extract-content" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.311084 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="extract-content" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.311325 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="345a7df8-47db-4f73-8daf-b45bb7726760" containerName="registry-server" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.313179 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.325919 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzmdj"] Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.394582 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-catalog-content\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.394632 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-utilities\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.394735 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqmw4\" (UniqueName: \"kubernetes.io/projected/9ae981d0-8076-40c8-8c37-6effc13d65ec-kube-api-access-wqmw4\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.496545 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-catalog-content\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.496584 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-utilities\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.496648 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqmw4\" (UniqueName: \"kubernetes.io/projected/9ae981d0-8076-40c8-8c37-6effc13d65ec-kube-api-access-wqmw4\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.498096 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-catalog-content\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.498146 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-utilities\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.516643 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wqmw4\" (UniqueName: \"kubernetes.io/projected/9ae981d0-8076-40c8-8c37-6effc13d65ec-kube-api-access-wqmw4\") pod \"certified-operators-bzmdj\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:08 crc kubenswrapper[4520]: I0130 07:57:08.633528 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:09 crc kubenswrapper[4520]: I0130 07:57:09.527147 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzmdj"] Jan 30 07:57:10 crc kubenswrapper[4520]: I0130 07:57:10.293143 4520 generic.go:334] "Generic (PLEG): container finished" podID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerID="55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231" exitCode=0 Jan 30 07:57:10 crc kubenswrapper[4520]: I0130 07:57:10.293252 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzmdj" event={"ID":"9ae981d0-8076-40c8-8c37-6effc13d65ec","Type":"ContainerDied","Data":"55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231"} Jan 30 07:57:10 crc kubenswrapper[4520]: I0130 07:57:10.293780 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzmdj" event={"ID":"9ae981d0-8076-40c8-8c37-6effc13d65ec","Type":"ContainerStarted","Data":"10378e7688e84f732399b922256f0197d7ff6cb534acf10ce73ae3bdcaceb3fa"} Jan 30 07:57:10 crc kubenswrapper[4520]: I0130 07:57:10.298866 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 07:57:12 crc kubenswrapper[4520]: I0130 07:57:12.314487 4520 generic.go:334] "Generic (PLEG): container finished" podID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerID="1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232" exitCode=0 Jan 30 07:57:12 crc kubenswrapper[4520]: I0130 07:57:12.314550 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzmdj" event={"ID":"9ae981d0-8076-40c8-8c37-6effc13d65ec","Type":"ContainerDied","Data":"1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232"} Jan 30 07:57:13 crc kubenswrapper[4520]: I0130 07:57:13.328414 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzmdj" event={"ID":"9ae981d0-8076-40c8-8c37-6effc13d65ec","Type":"ContainerStarted","Data":"e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630"} Jan 30 07:57:13 crc kubenswrapper[4520]: I0130 07:57:13.353786 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bzmdj" podStartSLOduration=2.831308314 podStartE2EDuration="5.352871951s" podCreationTimestamp="2026-01-30 07:57:08 +0000 UTC" firstStartedPulling="2026-01-30 07:57:10.294929455 +0000 UTC m=+4343.923281637" lastFinishedPulling="2026-01-30 07:57:12.816493092 +0000 UTC m=+4346.444845274" observedRunningTime="2026-01-30 07:57:13.346362273 +0000 UTC m=+4346.974714454" watchObservedRunningTime="2026-01-30 07:57:13.352871951 +0000 UTC m=+4346.981224132" Jan 30 07:57:18 crc kubenswrapper[4520]: I0130 07:57:18.633830 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:18 crc kubenswrapper[4520]: I0130 07:57:18.634334 4520 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:18 crc kubenswrapper[4520]: I0130 07:57:18.681030 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:19 crc kubenswrapper[4520]: I0130 07:57:19.418152 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:19 crc kubenswrapper[4520]: I0130 07:57:19.462802 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bzmdj"] Jan 30 07:57:21 crc kubenswrapper[4520]: I0130 07:57:21.395962 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bzmdj" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerName="registry-server" containerID="cri-o://e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630" gracePeriod=2 Jan 30 07:57:21 crc kubenswrapper[4520]: I0130 07:57:21.870496 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:21 crc kubenswrapper[4520]: I0130 07:57:21.901525 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-catalog-content\") pod \"9ae981d0-8076-40c8-8c37-6effc13d65ec\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " Jan 30 07:57:21 crc kubenswrapper[4520]: I0130 07:57:21.901607 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-utilities\") pod \"9ae981d0-8076-40c8-8c37-6effc13d65ec\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " Jan 30 07:57:21 crc kubenswrapper[4520]: I0130 07:57:21.901645 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqmw4\" (UniqueName: \"kubernetes.io/projected/9ae981d0-8076-40c8-8c37-6effc13d65ec-kube-api-access-wqmw4\") pod \"9ae981d0-8076-40c8-8c37-6effc13d65ec\" (UID: \"9ae981d0-8076-40c8-8c37-6effc13d65ec\") " Jan 30 07:57:21 crc kubenswrapper[4520]: I0130 07:57:21.910189 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-utilities" (OuterVolumeSpecName: "utilities") pod "9ae981d0-8076-40c8-8c37-6effc13d65ec" (UID: "9ae981d0-8076-40c8-8c37-6effc13d65ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:57:21 crc kubenswrapper[4520]: I0130 07:57:21.916686 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae981d0-8076-40c8-8c37-6effc13d65ec-kube-api-access-wqmw4" (OuterVolumeSpecName: "kube-api-access-wqmw4") pod "9ae981d0-8076-40c8-8c37-6effc13d65ec" (UID: "9ae981d0-8076-40c8-8c37-6effc13d65ec"). InnerVolumeSpecName "kube-api-access-wqmw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:57:21 crc kubenswrapper[4520]: I0130 07:57:21.938799 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ae981d0-8076-40c8-8c37-6effc13d65ec" (UID: "9ae981d0-8076-40c8-8c37-6effc13d65ec"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.005358 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.005405 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae981d0-8076-40c8-8c37-6effc13d65ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.005419 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqmw4\" (UniqueName: \"kubernetes.io/projected/9ae981d0-8076-40c8-8c37-6effc13d65ec-kube-api-access-wqmw4\") on node \"crc\" DevicePath \"\"" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.405482 4520 generic.go:334] "Generic (PLEG): container finished" podID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerID="e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630" exitCode=0 Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.405578 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzmdj" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.405586 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzmdj" event={"ID":"9ae981d0-8076-40c8-8c37-6effc13d65ec","Type":"ContainerDied","Data":"e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630"} Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.405886 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzmdj" event={"ID":"9ae981d0-8076-40c8-8c37-6effc13d65ec","Type":"ContainerDied","Data":"10378e7688e84f732399b922256f0197d7ff6cb534acf10ce73ae3bdcaceb3fa"} Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.405907 4520 scope.go:117] "RemoveContainer" containerID="e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.427452 4520 scope.go:117] "RemoveContainer" containerID="1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.438811 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bzmdj"] Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.445643 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bzmdj"] Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.459783 4520 scope.go:117] "RemoveContainer" containerID="55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.480659 4520 scope.go:117] "RemoveContainer" containerID="e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630" Jan 30 07:57:22 crc kubenswrapper[4520]: E0130 07:57:22.481386 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630\": container with ID starting with e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630 not found: ID does not exist" containerID="e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630" Jan 30 07:57:22 crc 
kubenswrapper[4520]: I0130 07:57:22.481419 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630"} err="failed to get container status \"e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630\": rpc error: code = NotFound desc = could not find container \"e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630\": container with ID starting with e117593f406b0170aeccf031803215dc9c2fb5aa3799f7c5aba572e2f62a5630 not found: ID does not exist" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.481440 4520 scope.go:117] "RemoveContainer" containerID="1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232" Jan 30 07:57:22 crc kubenswrapper[4520]: E0130 07:57:22.481814 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232\": container with ID starting with 1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232 not found: ID does not exist" containerID="1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.481854 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232"} err="failed to get container status \"1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232\": rpc error: code = NotFound desc = could not find container \"1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232\": container with ID starting with 1bb6666b16c51139e0a18e992263592019fac2b5c1103cfb6994610233e0e232 not found: ID does not exist" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.481889 4520 scope.go:117] "RemoveContainer" containerID="55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231" Jan 30 07:57:22 crc kubenswrapper[4520]: E0130 07:57:22.482200 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231\": container with ID starting with 55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231 not found: ID does not exist" containerID="55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.482241 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231"} err="failed to get container status \"55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231\": rpc error: code = NotFound desc = could not find container \"55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231\": container with ID starting with 55b56db2adb6c9c697726b7b8cb072f4d3627ca37d0b6d381f450c431dc5f231 not found: ID does not exist" Jan 30 07:57:22 crc kubenswrapper[4520]: I0130 07:57:22.702333 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" path="/var/lib/kubelet/pods/9ae981d0-8076-40c8-8c37-6effc13d65ec/volumes" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.522923 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fxmcq"] Jan 30 07:57:24 crc kubenswrapper[4520]: E0130 07:57:24.523757 4520 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerName="extract-utilities" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.523772 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerName="extract-utilities" Jan 30 07:57:24 crc kubenswrapper[4520]: E0130 07:57:24.523795 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerName="extract-content" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.523800 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerName="extract-content" Jan 30 07:57:24 crc kubenswrapper[4520]: E0130 07:57:24.523815 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerName="registry-server" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.523821 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerName="registry-server" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.527299 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae981d0-8076-40c8-8c37-6effc13d65ec" containerName="registry-server" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.528668 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.540990 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxmcq"] Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.571289 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-catalog-content\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.571432 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm7ht\" (UniqueName: \"kubernetes.io/projected/f078fc21-9400-4079-bf51-c2d4723c574a-kube-api-access-lm7ht\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.571457 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-utilities\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.673117 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-catalog-content\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.673226 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm7ht\" (UniqueName: 
\"kubernetes.io/projected/f078fc21-9400-4079-bf51-c2d4723c574a-kube-api-access-lm7ht\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.673250 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-utilities\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.673686 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-utilities\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.673873 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-catalog-content\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.698405 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm7ht\" (UniqueName: \"kubernetes.io/projected/f078fc21-9400-4079-bf51-c2d4723c574a-kube-api-access-lm7ht\") pod \"community-operators-fxmcq\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:24 crc kubenswrapper[4520]: I0130 07:57:24.843954 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:25 crc kubenswrapper[4520]: I0130 07:57:25.533988 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxmcq"] Jan 30 07:57:26 crc kubenswrapper[4520]: I0130 07:57:26.454784 4520 generic.go:334] "Generic (PLEG): container finished" podID="f078fc21-9400-4079-bf51-c2d4723c574a" containerID="0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533" exitCode=0 Jan 30 07:57:26 crc kubenswrapper[4520]: I0130 07:57:26.458158 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxmcq" event={"ID":"f078fc21-9400-4079-bf51-c2d4723c574a","Type":"ContainerDied","Data":"0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533"} Jan 30 07:57:26 crc kubenswrapper[4520]: I0130 07:57:26.462716 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxmcq" event={"ID":"f078fc21-9400-4079-bf51-c2d4723c574a","Type":"ContainerStarted","Data":"098b371b4f42f7814f6aed8ae74d0903cab77572c87799fae4ef1293a0c6805d"} Jan 30 07:57:27 crc kubenswrapper[4520]: I0130 07:57:27.471737 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxmcq" event={"ID":"f078fc21-9400-4079-bf51-c2d4723c574a","Type":"ContainerStarted","Data":"604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c"} Jan 30 07:57:27 crc kubenswrapper[4520]: I0130 07:57:27.793412 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:57:27 crc kubenswrapper[4520]: I0130 07:57:27.793537 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 07:57:27 crc kubenswrapper[4520]: I0130 07:57:27.793633 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 07:57:27 crc kubenswrapper[4520]: I0130 07:57:27.795153 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77cd5b87e2a7d3d7353f16141d94f2ae98b277110007125a51abfb7eb1a1a076"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 07:57:27 crc kubenswrapper[4520]: I0130 07:57:27.795245 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://77cd5b87e2a7d3d7353f16141d94f2ae98b277110007125a51abfb7eb1a1a076" gracePeriod=600 Jan 30 07:57:28 crc kubenswrapper[4520]: I0130 07:57:28.479917 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="77cd5b87e2a7d3d7353f16141d94f2ae98b277110007125a51abfb7eb1a1a076" exitCode=0 Jan 30 07:57:28 crc kubenswrapper[4520]: 
I0130 07:57:28.479951 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"77cd5b87e2a7d3d7353f16141d94f2ae98b277110007125a51abfb7eb1a1a076"} Jan 30 07:57:28 crc kubenswrapper[4520]: I0130 07:57:28.480236 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81"} Jan 30 07:57:28 crc kubenswrapper[4520]: I0130 07:57:28.480253 4520 scope.go:117] "RemoveContainer" containerID="b5e2c9a8b5c47ce871755d1f4913e2df4198ac5cdccac62ebb1db39109e44b6d" Jan 30 07:57:29 crc kubenswrapper[4520]: I0130 07:57:29.491004 4520 generic.go:334] "Generic (PLEG): container finished" podID="f078fc21-9400-4079-bf51-c2d4723c574a" containerID="604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c" exitCode=0 Jan 30 07:57:29 crc kubenswrapper[4520]: I0130 07:57:29.491114 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxmcq" event={"ID":"f078fc21-9400-4079-bf51-c2d4723c574a","Type":"ContainerDied","Data":"604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c"} Jan 30 07:57:30 crc kubenswrapper[4520]: I0130 07:57:30.511865 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxmcq" event={"ID":"f078fc21-9400-4079-bf51-c2d4723c574a","Type":"ContainerStarted","Data":"eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee"} Jan 30 07:57:30 crc kubenswrapper[4520]: I0130 07:57:30.545539 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fxmcq" podStartSLOduration=2.955663333 podStartE2EDuration="6.545465592s" podCreationTimestamp="2026-01-30 07:57:24 +0000 UTC" firstStartedPulling="2026-01-30 07:57:26.465395086 +0000 UTC m=+4360.093747267" lastFinishedPulling="2026-01-30 07:57:30.055197345 +0000 UTC m=+4363.683549526" observedRunningTime="2026-01-30 07:57:30.540275794 +0000 UTC m=+4364.168627975" watchObservedRunningTime="2026-01-30 07:57:30.545465592 +0000 UTC m=+4364.173817772" Jan 30 07:57:34 crc kubenswrapper[4520]: I0130 07:57:34.844836 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:34 crc kubenswrapper[4520]: I0130 07:57:34.845659 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:35 crc kubenswrapper[4520]: I0130 07:57:35.295230 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:35 crc kubenswrapper[4520]: I0130 07:57:35.640624 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:35 crc kubenswrapper[4520]: I0130 07:57:35.714868 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fxmcq"] Jan 30 07:57:37 crc kubenswrapper[4520]: I0130 07:57:37.568438 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fxmcq" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" containerName="registry-server" 
containerID="cri-o://eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee" gracePeriod=2 Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.014026 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.117402 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm7ht\" (UniqueName: \"kubernetes.io/projected/f078fc21-9400-4079-bf51-c2d4723c574a-kube-api-access-lm7ht\") pod \"f078fc21-9400-4079-bf51-c2d4723c574a\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.117679 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-utilities\") pod \"f078fc21-9400-4079-bf51-c2d4723c574a\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.117717 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-catalog-content\") pod \"f078fc21-9400-4079-bf51-c2d4723c574a\" (UID: \"f078fc21-9400-4079-bf51-c2d4723c574a\") " Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.118206 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-utilities" (OuterVolumeSpecName: "utilities") pod "f078fc21-9400-4079-bf51-c2d4723c574a" (UID: "f078fc21-9400-4079-bf51-c2d4723c574a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.118760 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.125553 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f078fc21-9400-4079-bf51-c2d4723c574a-kube-api-access-lm7ht" (OuterVolumeSpecName: "kube-api-access-lm7ht") pod "f078fc21-9400-4079-bf51-c2d4723c574a" (UID: "f078fc21-9400-4079-bf51-c2d4723c574a"). InnerVolumeSpecName "kube-api-access-lm7ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.158469 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f078fc21-9400-4079-bf51-c2d4723c574a" (UID: "f078fc21-9400-4079-bf51-c2d4723c574a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.221275 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f078fc21-9400-4079-bf51-c2d4723c574a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.221342 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm7ht\" (UniqueName: \"kubernetes.io/projected/f078fc21-9400-4079-bf51-c2d4723c574a-kube-api-access-lm7ht\") on node \"crc\" DevicePath \"\"" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.580616 4520 generic.go:334] "Generic (PLEG): container finished" podID="f078fc21-9400-4079-bf51-c2d4723c574a" containerID="eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee" exitCode=0 Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.580706 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fxmcq" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.580678 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxmcq" event={"ID":"f078fc21-9400-4079-bf51-c2d4723c574a","Type":"ContainerDied","Data":"eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee"} Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.581556 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxmcq" event={"ID":"f078fc21-9400-4079-bf51-c2d4723c574a","Type":"ContainerDied","Data":"098b371b4f42f7814f6aed8ae74d0903cab77572c87799fae4ef1293a0c6805d"} Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.581625 4520 scope.go:117] "RemoveContainer" containerID="eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.613478 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fxmcq"] Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.617039 4520 scope.go:117] "RemoveContainer" containerID="604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.619485 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fxmcq"] Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.665593 4520 scope.go:117] "RemoveContainer" containerID="0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.688094 4520 scope.go:117] "RemoveContainer" containerID="eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee" Jan 30 07:57:38 crc kubenswrapper[4520]: E0130 07:57:38.688599 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee\": container with ID starting with eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee not found: ID does not exist" containerID="eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.688647 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee"} err="failed to get container status 
\"eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee\": rpc error: code = NotFound desc = could not find container \"eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee\": container with ID starting with eac0aebebc26ef1079b20b1291db8cedeb3064593aa84dd171b3441f02ca87ee not found: ID does not exist" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.688682 4520 scope.go:117] "RemoveContainer" containerID="604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c" Jan 30 07:57:38 crc kubenswrapper[4520]: E0130 07:57:38.688970 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c\": container with ID starting with 604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c not found: ID does not exist" containerID="604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.689002 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c"} err="failed to get container status \"604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c\": rpc error: code = NotFound desc = could not find container \"604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c\": container with ID starting with 604f166ebddcec03e41862e7f9d7cb8b611421a893ae635f0323000a1b47b03c not found: ID does not exist" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.689023 4520 scope.go:117] "RemoveContainer" containerID="0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533" Jan 30 07:57:38 crc kubenswrapper[4520]: E0130 07:57:38.690295 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533\": container with ID starting with 0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533 not found: ID does not exist" containerID="0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.690337 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533"} err="failed to get container status \"0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533\": rpc error: code = NotFound desc = could not find container \"0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533\": container with ID starting with 0f5fcaa82319eec48cdc7c202d571c14469bdd7d19fb0cb5bff06c61079a9533 not found: ID does not exist" Jan 30 07:57:38 crc kubenswrapper[4520]: I0130 07:57:38.695871 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" path="/var/lib/kubelet/pods/f078fc21-9400-4079-bf51-c2d4723c574a/volumes" Jan 30 07:59:57 crc kubenswrapper[4520]: I0130 07:59:57.793459 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 07:59:57 crc kubenswrapper[4520]: I0130 07:59:57.793960 4520 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.237332 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n"] Jan 30 08:00:00 crc kubenswrapper[4520]: E0130 08:00:00.238193 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" containerName="registry-server" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.238206 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" containerName="registry-server" Jan 30 08:00:00 crc kubenswrapper[4520]: E0130 08:00:00.238220 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" containerName="extract-utilities" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.238226 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" containerName="extract-utilities" Jan 30 08:00:00 crc kubenswrapper[4520]: E0130 08:00:00.238247 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" containerName="extract-content" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.238254 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" containerName="extract-content" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.238403 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f078fc21-9400-4079-bf51-c2d4723c574a" containerName="registry-server" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.239006 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.251323 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n"] Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.253157 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c79b19-041d-49f3-b54f-7e91d60f0439-config-volume\") pod \"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.253264 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmb65\" (UniqueName: \"kubernetes.io/projected/34c79b19-041d-49f3-b54f-7e91d60f0439-kube-api-access-hmb65\") pod \"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.253342 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c79b19-041d-49f3-b54f-7e91d60f0439-secret-volume\") pod \"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.254611 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.255122 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.357641 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmb65\" (UniqueName: \"kubernetes.io/projected/34c79b19-041d-49f3-b54f-7e91d60f0439-kube-api-access-hmb65\") pod \"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.357830 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c79b19-041d-49f3-b54f-7e91d60f0439-secret-volume\") pod \"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.358004 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c79b19-041d-49f3-b54f-7e91d60f0439-config-volume\") pod \"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.359051 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c79b19-041d-49f3-b54f-7e91d60f0439-config-volume\") pod 
\"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.377226 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c79b19-041d-49f3-b54f-7e91d60f0439-secret-volume\") pod \"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.378170 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmb65\" (UniqueName: \"kubernetes.io/projected/34c79b19-041d-49f3-b54f-7e91d60f0439-kube-api-access-hmb65\") pod \"collect-profiles-29496000-dxf7n\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:00 crc kubenswrapper[4520]: I0130 08:00:00.568358 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:01 crc kubenswrapper[4520]: I0130 08:00:01.003689 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n"] Jan 30 08:00:01 crc kubenswrapper[4520]: W0130 08:00:01.006325 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34c79b19_041d_49f3_b54f_7e91d60f0439.slice/crio-c474a3708ff0d9be6b32c7a4b76cc5aa6479476f738a4ee106f28d13d040597d WatchSource:0}: Error finding container c474a3708ff0d9be6b32c7a4b76cc5aa6479476f738a4ee106f28d13d040597d: Status 404 returned error can't find the container with id c474a3708ff0d9be6b32c7a4b76cc5aa6479476f738a4ee106f28d13d040597d Jan 30 08:00:01 crc kubenswrapper[4520]: I0130 08:00:01.759021 4520 generic.go:334] "Generic (PLEG): container finished" podID="34c79b19-041d-49f3-b54f-7e91d60f0439" containerID="f6fe5cee399d461b5a50f76ce533722ded57424fc3419f982fb056acf4dfa615" exitCode=0 Jan 30 08:00:01 crc kubenswrapper[4520]: I0130 08:00:01.759191 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" event={"ID":"34c79b19-041d-49f3-b54f-7e91d60f0439","Type":"ContainerDied","Data":"f6fe5cee399d461b5a50f76ce533722ded57424fc3419f982fb056acf4dfa615"} Jan 30 08:00:01 crc kubenswrapper[4520]: I0130 08:00:01.760352 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" event={"ID":"34c79b19-041d-49f3-b54f-7e91d60f0439","Type":"ContainerStarted","Data":"c474a3708ff0d9be6b32c7a4b76cc5aa6479476f738a4ee106f28d13d040597d"} Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.077406 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.116623 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c79b19-041d-49f3-b54f-7e91d60f0439-config-volume\") pod \"34c79b19-041d-49f3-b54f-7e91d60f0439\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.116917 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmb65\" (UniqueName: \"kubernetes.io/projected/34c79b19-041d-49f3-b54f-7e91d60f0439-kube-api-access-hmb65\") pod \"34c79b19-041d-49f3-b54f-7e91d60f0439\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.116998 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c79b19-041d-49f3-b54f-7e91d60f0439-secret-volume\") pod \"34c79b19-041d-49f3-b54f-7e91d60f0439\" (UID: \"34c79b19-041d-49f3-b54f-7e91d60f0439\") " Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.117129 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c79b19-041d-49f3-b54f-7e91d60f0439-config-volume" (OuterVolumeSpecName: "config-volume") pod "34c79b19-041d-49f3-b54f-7e91d60f0439" (UID: "34c79b19-041d-49f3-b54f-7e91d60f0439"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.117435 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c79b19-041d-49f3-b54f-7e91d60f0439-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.124623 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c79b19-041d-49f3-b54f-7e91d60f0439-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34c79b19-041d-49f3-b54f-7e91d60f0439" (UID: "34c79b19-041d-49f3-b54f-7e91d60f0439"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.124765 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c79b19-041d-49f3-b54f-7e91d60f0439-kube-api-access-hmb65" (OuterVolumeSpecName: "kube-api-access-hmb65") pod "34c79b19-041d-49f3-b54f-7e91d60f0439" (UID: "34c79b19-041d-49f3-b54f-7e91d60f0439"). InnerVolumeSpecName "kube-api-access-hmb65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.221036 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmb65\" (UniqueName: \"kubernetes.io/projected/34c79b19-041d-49f3-b54f-7e91d60f0439-kube-api-access-hmb65\") on node \"crc\" DevicePath \"\"" Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.221088 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c79b19-041d-49f3-b54f-7e91d60f0439-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.778043 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" event={"ID":"34c79b19-041d-49f3-b54f-7e91d60f0439","Type":"ContainerDied","Data":"c474a3708ff0d9be6b32c7a4b76cc5aa6479476f738a4ee106f28d13d040597d"} Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.778164 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n" Jan 30 08:00:03 crc kubenswrapper[4520]: I0130 08:00:03.778111 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c474a3708ff0d9be6b32c7a4b76cc5aa6479476f738a4ee106f28d13d040597d" Jan 30 08:00:04 crc kubenswrapper[4520]: I0130 08:00:04.167287 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw"] Jan 30 08:00:04 crc kubenswrapper[4520]: I0130 08:00:04.179280 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495955-rkfmw"] Jan 30 08:00:04 crc kubenswrapper[4520]: I0130 08:00:04.695617 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3e954a-9302-42c8-a729-5d277eb821fc" path="/var/lib/kubelet/pods/bc3e954a-9302-42c8-a729-5d277eb821fc/volumes" Jan 30 08:00:27 crc kubenswrapper[4520]: I0130 08:00:27.793430 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:00:27 crc kubenswrapper[4520]: I0130 08:00:27.794082 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:00:57 crc kubenswrapper[4520]: I0130 08:00:57.469806 4520 scope.go:117] "RemoveContainer" containerID="e8ad25f22891e130911a01be1386a450861f97eed8b68071c5a6ce19fb4d3fa3" Jan 30 08:00:57 crc kubenswrapper[4520]: I0130 08:00:57.794124 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:00:57 crc kubenswrapper[4520]: I0130 08:00:57.794480 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:00:57 crc kubenswrapper[4520]: I0130 08:00:57.794549 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 08:00:57 crc kubenswrapper[4520]: I0130 08:00:57.795498 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:00:57 crc kubenswrapper[4520]: I0130 08:00:57.795581 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" gracePeriod=600 Jan 30 08:00:57 crc kubenswrapper[4520]: E0130 08:00:57.917369 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:00:58 crc kubenswrapper[4520]: I0130 08:00:58.233183 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" exitCode=0 Jan 30 08:00:58 crc kubenswrapper[4520]: I0130 08:00:58.233201 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81"} Jan 30 08:00:58 crc kubenswrapper[4520]: I0130 08:00:58.233274 4520 scope.go:117] "RemoveContainer" containerID="77cd5b87e2a7d3d7353f16141d94f2ae98b277110007125a51abfb7eb1a1a076" Jan 30 08:00:58 crc kubenswrapper[4520]: I0130 08:00:58.233925 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:00:58 crc kubenswrapper[4520]: E0130 08:00:58.234157 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.138989 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496001-zcfch"] Jan 30 08:01:00 crc kubenswrapper[4520]: E0130 08:01:00.139606 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c79b19-041d-49f3-b54f-7e91d60f0439" containerName="collect-profiles" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 
08:01:00.139631 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c79b19-041d-49f3-b54f-7e91d60f0439" containerName="collect-profiles" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.139811 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c79b19-041d-49f3-b54f-7e91d60f0439" containerName="collect-profiles" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.141588 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.151770 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496001-zcfch"] Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.273328 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-fernet-keys\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.273556 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-combined-ca-bundle\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.273645 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpt47\" (UniqueName: \"kubernetes.io/projected/df6a154c-b04d-43ff-bd3e-fc2cac82373c-kube-api-access-vpt47\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.273683 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-config-data\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.376289 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-fernet-keys\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.376425 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-combined-ca-bundle\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.376489 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpt47\" (UniqueName: \"kubernetes.io/projected/df6a154c-b04d-43ff-bd3e-fc2cac82373c-kube-api-access-vpt47\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 
08:01:00.376534 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-config-data\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.385733 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-config-data\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.387096 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-combined-ca-bundle\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.400821 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-fernet-keys\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.412259 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpt47\" (UniqueName: \"kubernetes.io/projected/df6a154c-b04d-43ff-bd3e-fc2cac82373c-kube-api-access-vpt47\") pod \"keystone-cron-29496001-zcfch\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.458948 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:00 crc kubenswrapper[4520]: I0130 08:01:00.970051 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496001-zcfch"] Jan 30 08:01:01 crc kubenswrapper[4520]: I0130 08:01:01.265191 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496001-zcfch" event={"ID":"df6a154c-b04d-43ff-bd3e-fc2cac82373c","Type":"ContainerStarted","Data":"f9ad60913ef9ab6d6a3bf4a6421e7f3d0402c54445bb6249d57d970a0bb9d4ce"} Jan 30 08:01:01 crc kubenswrapper[4520]: I0130 08:01:01.265667 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496001-zcfch" event={"ID":"df6a154c-b04d-43ff-bd3e-fc2cac82373c","Type":"ContainerStarted","Data":"114617f01248213074e2e32b2fc5418b1159b942e55854eb8cce722e69182f67"} Jan 30 08:01:01 crc kubenswrapper[4520]: I0130 08:01:01.288001 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496001-zcfch" podStartSLOduration=1.287982413 podStartE2EDuration="1.287982413s" podCreationTimestamp="2026-01-30 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:01:01.279733063 +0000 UTC m=+4574.908085244" watchObservedRunningTime="2026-01-30 08:01:01.287982413 +0000 UTC m=+4574.916334595" Jan 30 08:01:04 crc kubenswrapper[4520]: I0130 08:01:04.291139 4520 generic.go:334] "Generic (PLEG): container finished" podID="df6a154c-b04d-43ff-bd3e-fc2cac82373c" containerID="f9ad60913ef9ab6d6a3bf4a6421e7f3d0402c54445bb6249d57d970a0bb9d4ce" exitCode=0 Jan 30 08:01:04 crc kubenswrapper[4520]: I0130 08:01:04.291217 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496001-zcfch" event={"ID":"df6a154c-b04d-43ff-bd3e-fc2cac82373c","Type":"ContainerDied","Data":"f9ad60913ef9ab6d6a3bf4a6421e7f3d0402c54445bb6249d57d970a0bb9d4ce"} Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.637764 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.692923 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpt47\" (UniqueName: \"kubernetes.io/projected/df6a154c-b04d-43ff-bd3e-fc2cac82373c-kube-api-access-vpt47\") pod \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.692998 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-combined-ca-bundle\") pod \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.693188 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-fernet-keys\") pod \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.693353 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-config-data\") pod \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\" (UID: \"df6a154c-b04d-43ff-bd3e-fc2cac82373c\") " Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.699580 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6a154c-b04d-43ff-bd3e-fc2cac82373c-kube-api-access-vpt47" (OuterVolumeSpecName: "kube-api-access-vpt47") pod "df6a154c-b04d-43ff-bd3e-fc2cac82373c" (UID: "df6a154c-b04d-43ff-bd3e-fc2cac82373c"). InnerVolumeSpecName "kube-api-access-vpt47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.699897 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "df6a154c-b04d-43ff-bd3e-fc2cac82373c" (UID: "df6a154c-b04d-43ff-bd3e-fc2cac82373c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.721353 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df6a154c-b04d-43ff-bd3e-fc2cac82373c" (UID: "df6a154c-b04d-43ff-bd3e-fc2cac82373c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.739235 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-config-data" (OuterVolumeSpecName: "config-data") pod "df6a154c-b04d-43ff-bd3e-fc2cac82373c" (UID: "df6a154c-b04d-43ff-bd3e-fc2cac82373c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.796656 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.796680 4520 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.796690 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6a154c-b04d-43ff-bd3e-fc2cac82373c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:01:05 crc kubenswrapper[4520]: I0130 08:01:05.796702 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpt47\" (UniqueName: \"kubernetes.io/projected/df6a154c-b04d-43ff-bd3e-fc2cac82373c-kube-api-access-vpt47\") on node \"crc\" DevicePath \"\"" Jan 30 08:01:06 crc kubenswrapper[4520]: I0130 08:01:06.310649 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496001-zcfch" event={"ID":"df6a154c-b04d-43ff-bd3e-fc2cac82373c","Type":"ContainerDied","Data":"114617f01248213074e2e32b2fc5418b1159b942e55854eb8cce722e69182f67"} Jan 30 08:01:06 crc kubenswrapper[4520]: I0130 08:01:06.310692 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114617f01248213074e2e32b2fc5418b1159b942e55854eb8cce722e69182f67" Jan 30 08:01:06 crc kubenswrapper[4520]: I0130 08:01:06.310977 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496001-zcfch" Jan 30 08:01:10 crc kubenswrapper[4520]: I0130 08:01:10.686875 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:01:10 crc kubenswrapper[4520]: E0130 08:01:10.688004 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:01:25 crc kubenswrapper[4520]: I0130 08:01:25.685945 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:01:25 crc kubenswrapper[4520]: E0130 08:01:25.686810 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:01:40 crc kubenswrapper[4520]: I0130 08:01:40.691243 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:01:40 crc kubenswrapper[4520]: E0130 08:01:40.692789 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:01:52 crc kubenswrapper[4520]: I0130 08:01:52.686636 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:01:52 crc kubenswrapper[4520]: E0130 08:01:52.688973 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:02:03 crc kubenswrapper[4520]: I0130 08:02:03.687164 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:02:03 crc kubenswrapper[4520]: E0130 08:02:03.687842 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:02:15 crc kubenswrapper[4520]: I0130 08:02:15.686455 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:02:15 crc kubenswrapper[4520]: E0130 08:02:15.687074 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:02:29 crc kubenswrapper[4520]: I0130 08:02:29.686007 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:02:29 crc kubenswrapper[4520]: E0130 08:02:29.686700 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:02:44 crc kubenswrapper[4520]: I0130 08:02:44.686011 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:02:44 crc kubenswrapper[4520]: E0130 08:02:44.686723 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:02:56 crc kubenswrapper[4520]: I0130 08:02:56.692781 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:02:56 crc kubenswrapper[4520]: E0130 08:02:56.693777 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:03:11 crc kubenswrapper[4520]: I0130 08:03:11.686127 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:03:11 crc kubenswrapper[4520]: E0130 08:03:11.687059 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:03:25 crc kubenswrapper[4520]: I0130 08:03:25.685943 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:03:25 crc kubenswrapper[4520]: E0130 08:03:25.686824 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:03:38 crc kubenswrapper[4520]: I0130 08:03:38.685294 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:03:38 crc kubenswrapper[4520]: E0130 08:03:38.686154 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:03:50 crc kubenswrapper[4520]: I0130 08:03:50.686268 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:03:50 crc kubenswrapper[4520]: E0130 08:03:50.687251 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
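podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"

The alternating "RemoveContainer" / "Error syncing pod, skipping" pairs above and below are the kubelet's restart back-off for machine-config-daemon: the container has crashed often enough that its restart delay has reached the 5m0s cap quoted in the error, and every sync attempt inside that window is rejected until the back-off expires and the container is started again (ContainerStarted at 08:05:59 below). An illustrative sketch of a capped, doubling back-off of this shape, assuming the usual kubelet defaults of a 10s initial delay and a 5m cap (not the kubelet's own implementation):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second // assumed kubelet default
		maxDelay     = 5 * time.Minute  // matches the "back-off 5m0s" above
	)
	delay := initialDelay
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("crash %d: wait %v before restarting\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // once capped, every further restart waits 5m0s
		}
	}
}
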
podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:04:01 crc kubenswrapper[4520]: I0130 08:04:01.685553 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:04:01 crc kubenswrapper[4520]: E0130 08:04:01.686385 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:04:15 crc kubenswrapper[4520]: I0130 08:04:15.685876 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:04:15 crc kubenswrapper[4520]: E0130 08:04:15.686659 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:04:27 crc kubenswrapper[4520]: I0130 08:04:27.687186 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:04:27 crc kubenswrapper[4520]: E0130 08:04:27.688259 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:04:40 crc kubenswrapper[4520]: I0130 08:04:40.685813 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:04:40 crc kubenswrapper[4520]: E0130 08:04:40.686701 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:04:55 crc kubenswrapper[4520]: I0130 08:04:55.685848 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:04:55 crc kubenswrapper[4520]: E0130 08:04:55.686803 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:05:08 crc kubenswrapper[4520]: I0130 08:05:08.686176 4520 scope.go:117] "RemoveContainer" 
containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:05:08 crc kubenswrapper[4520]: E0130 08:05:08.687181 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:05:20 crc kubenswrapper[4520]: I0130 08:05:20.686269 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:05:20 crc kubenswrapper[4520]: E0130 08:05:20.687166 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:05:33 crc kubenswrapper[4520]: I0130 08:05:33.686409 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:05:33 crc kubenswrapper[4520]: E0130 08:05:33.687281 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:05:46 crc kubenswrapper[4520]: I0130 08:05:46.691300 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:05:46 crc kubenswrapper[4520]: E0130 08:05:46.692302 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:05:58 crc kubenswrapper[4520]: I0130 08:05:58.688819 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:05:59 crc kubenswrapper[4520]: I0130 08:05:59.858973 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"f9818f2ae98bacfa393a4d8ebbb4fe38ee7080bf2654b52b1a9420fcf14b28e3"} Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.323400 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mqzvd"] Jan 30 08:07:09 crc kubenswrapper[4520]: E0130 08:07:09.324672 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6a154c-b04d-43ff-bd3e-fc2cac82373c" containerName="keystone-cron" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.324691 4520 
state_mem.go:107] "Deleted CPUSet assignment" podUID="df6a154c-b04d-43ff-bd3e-fc2cac82373c" containerName="keystone-cron" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.324905 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6a154c-b04d-43ff-bd3e-fc2cac82373c" containerName="keystone-cron" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.327033 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.348132 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqzvd"] Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.472792 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-catalog-content\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.472914 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-utilities\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.473043 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlj2w\" (UniqueName: \"kubernetes.io/projected/481f8c22-6e89-4093-b22e-428a31a476c9-kube-api-access-jlj2w\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.575855 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlj2w\" (UniqueName: \"kubernetes.io/projected/481f8c22-6e89-4093-b22e-428a31a476c9-kube-api-access-jlj2w\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.576592 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-catalog-content\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.576876 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-utilities\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.578700 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-utilities\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.579250 
4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-catalog-content\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.596603 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlj2w\" (UniqueName: \"kubernetes.io/projected/481f8c22-6e89-4093-b22e-428a31a476c9-kube-api-access-jlj2w\") pod \"redhat-marketplace-mqzvd\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:09 crc kubenswrapper[4520]: I0130 08:07:09.657801 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:10 crc kubenswrapper[4520]: I0130 08:07:10.253073 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqzvd"] Jan 30 08:07:10 crc kubenswrapper[4520]: I0130 08:07:10.431995 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqzvd" event={"ID":"481f8c22-6e89-4093-b22e-428a31a476c9","Type":"ContainerStarted","Data":"e4181a1573285152a26d1562982ce7bd376edd99bdec4363be32ed2459a804cd"} Jan 30 08:07:11 crc kubenswrapper[4520]: I0130 08:07:11.442138 4520 generic.go:334] "Generic (PLEG): container finished" podID="481f8c22-6e89-4093-b22e-428a31a476c9" containerID="6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7" exitCode=0 Jan 30 08:07:11 crc kubenswrapper[4520]: I0130 08:07:11.442259 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqzvd" event={"ID":"481f8c22-6e89-4093-b22e-428a31a476c9","Type":"ContainerDied","Data":"6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7"} Jan 30 08:07:11 crc kubenswrapper[4520]: I0130 08:07:11.444077 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:07:12 crc kubenswrapper[4520]: I0130 08:07:12.451788 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqzvd" event={"ID":"481f8c22-6e89-4093-b22e-428a31a476c9","Type":"ContainerStarted","Data":"aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926"} Jan 30 08:07:13 crc kubenswrapper[4520]: I0130 08:07:13.461309 4520 generic.go:334] "Generic (PLEG): container finished" podID="481f8c22-6e89-4093-b22e-428a31a476c9" containerID="aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926" exitCode=0 Jan 30 08:07:13 crc kubenswrapper[4520]: I0130 08:07:13.461359 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqzvd" event={"ID":"481f8c22-6e89-4093-b22e-428a31a476c9","Type":"ContainerDied","Data":"aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926"} Jan 30 08:07:14 crc kubenswrapper[4520]: I0130 08:07:14.470350 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqzvd" event={"ID":"481f8c22-6e89-4093-b22e-428a31a476c9","Type":"ContainerStarted","Data":"6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639"} Jan 30 08:07:14 crc kubenswrapper[4520]: I0130 08:07:14.495046 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
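pod="openshift-marketplace/redhat-marketplace-mqzvd" podStartSLOduration=3.009315957 podStartE2EDuration="5.494197448s" podCreationTimestamp="2026-01-30 08:07:09 +0000 UTC" firstStartedPulling="2026-01-30 08:07:11.443833239 +0000 UTC m=+4945.072185411" lastFinishedPulling="2026-01-30 08:07:13.928714721 +0000 UTC m=+4947.557066902" observedRunningTime="2026-01-30 08:07:14.485406451 +0000 UTC m=+4948.113758631" watchObservedRunningTime="2026-01-30 08:07:14.494197448 +0000 UTC m=+4948.122549629"

Unlike the keystone-cron entry earlier, this tracker entry records a real image pull: the pull ran from m=+4945.072185411 to m=+4947.557066902 (about 2.48s), and podStartSLOduration is podStartE2EDuration minus that pull window, 5.494197448s - 2.484881491s = 3.009315957s. The same arithmetic as a sketch, using the monotonic offsets copied from the entry (illustrative, not kubelet code):

package main

import "fmt"

func main() {
	const (
		e2eSeconds = 5.494197448    // podStartE2EDuration
		pullStart  = 4945.072185411 // firstStartedPulling, m=+ offset
		pullEnd    = 4947.557066902 // lastFinishedPulling, m=+ offset
	)
	// The SLO duration excludes time spent pulling the image.
	fmt.Printf("%.9fs\n", e2eSeconds-(pullEnd-pullStart)) // 3.009315957s
}
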
pod="openshift-marketplace/redhat-marketplace-mqzvd" podStartSLOduration=3.009315957 podStartE2EDuration="5.494197448s" podCreationTimestamp="2026-01-30 08:07:09 +0000 UTC" firstStartedPulling="2026-01-30 08:07:11.443833239 +0000 UTC m=+4945.072185411" lastFinishedPulling="2026-01-30 08:07:13.928714721 +0000 UTC m=+4947.557066902" observedRunningTime="2026-01-30 08:07:14.485406451 +0000 UTC m=+4948.113758631" watchObservedRunningTime="2026-01-30 08:07:14.494197448 +0000 UTC m=+4948.122549629" Jan 30 08:07:19 crc kubenswrapper[4520]: I0130 08:07:19.658695 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:19 crc kubenswrapper[4520]: I0130 08:07:19.659035 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:19 crc kubenswrapper[4520]: I0130 08:07:19.702688 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:20 crc kubenswrapper[4520]: I0130 08:07:20.568238 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:20 crc kubenswrapper[4520]: I0130 08:07:20.609991 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqzvd"] Jan 30 08:07:22 crc kubenswrapper[4520]: I0130 08:07:22.543319 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mqzvd" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" containerName="registry-server" containerID="cri-o://6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639" gracePeriod=2 Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.065416 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.179356 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-catalog-content\") pod \"481f8c22-6e89-4093-b22e-428a31a476c9\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.179411 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlj2w\" (UniqueName: \"kubernetes.io/projected/481f8c22-6e89-4093-b22e-428a31a476c9-kube-api-access-jlj2w\") pod \"481f8c22-6e89-4093-b22e-428a31a476c9\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.179710 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-utilities\") pod \"481f8c22-6e89-4093-b22e-428a31a476c9\" (UID: \"481f8c22-6e89-4093-b22e-428a31a476c9\") " Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.180835 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-utilities" (OuterVolumeSpecName: "utilities") pod "481f8c22-6e89-4093-b22e-428a31a476c9" (UID: "481f8c22-6e89-4093-b22e-428a31a476c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.192783 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481f8c22-6e89-4093-b22e-428a31a476c9-kube-api-access-jlj2w" (OuterVolumeSpecName: "kube-api-access-jlj2w") pod "481f8c22-6e89-4093-b22e-428a31a476c9" (UID: "481f8c22-6e89-4093-b22e-428a31a476c9"). InnerVolumeSpecName "kube-api-access-jlj2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.199397 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481f8c22-6e89-4093-b22e-428a31a476c9" (UID: "481f8c22-6e89-4093-b22e-428a31a476c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.281483 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.281527 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlj2w\" (UniqueName: \"kubernetes.io/projected/481f8c22-6e89-4093-b22e-428a31a476c9-kube-api-access-jlj2w\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.281539 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481f8c22-6e89-4093-b22e-428a31a476c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.551852 4520 generic.go:334] "Generic (PLEG): container finished" podID="481f8c22-6e89-4093-b22e-428a31a476c9" containerID="6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639" exitCode=0 Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.552048 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqzvd" event={"ID":"481f8c22-6e89-4093-b22e-428a31a476c9","Type":"ContainerDied","Data":"6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639"} Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.552146 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqzvd" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.552170 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqzvd" event={"ID":"481f8c22-6e89-4093-b22e-428a31a476c9","Type":"ContainerDied","Data":"e4181a1573285152a26d1562982ce7bd376edd99bdec4363be32ed2459a804cd"} Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.552214 4520 scope.go:117] "RemoveContainer" containerID="6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.572051 4520 scope.go:117] "RemoveContainer" containerID="aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.595272 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqzvd"] Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.601186 4520 scope.go:117] "RemoveContainer" containerID="6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.603606 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqzvd"] Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.634531 4520 scope.go:117] "RemoveContainer" containerID="6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639" Jan 30 08:07:23 crc kubenswrapper[4520]: E0130 08:07:23.637215 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639\": container with ID starting with 6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639 not found: ID does not exist" containerID="6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.637258 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639"} err="failed to get container status \"6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639\": rpc error: code = NotFound desc = could not find container \"6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639\": container with ID starting with 6d7e4ac040bf09bba1f1de156514dd3e63da79d3b36026280322372e70f1a639 not found: ID does not exist" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.637294 4520 scope.go:117] "RemoveContainer" containerID="aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926" Jan 30 08:07:23 crc kubenswrapper[4520]: E0130 08:07:23.638367 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926\": container with ID starting with aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926 not found: ID does not exist" containerID="aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.638399 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926"} err="failed to get container status \"aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926\": rpc error: code = NotFound desc = could not find 
container \"aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926\": container with ID starting with aba7defa888933e4121682a0806d764ade3cedbd7d736cdef2cb2a06bb228926 not found: ID does not exist" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.638424 4520 scope.go:117] "RemoveContainer" containerID="6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7" Jan 30 08:07:23 crc kubenswrapper[4520]: E0130 08:07:23.638711 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7\": container with ID starting with 6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7 not found: ID does not exist" containerID="6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7" Jan 30 08:07:23 crc kubenswrapper[4520]: I0130 08:07:23.638739 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7"} err="failed to get container status \"6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7\": rpc error: code = NotFound desc = could not find container \"6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7\": container with ID starting with 6dbbde0ab6c051495a250f8335b55f63263cc2ee4679194dbe2b77557230ccc7 not found: ID does not exist" Jan 30 08:07:24 crc kubenswrapper[4520]: I0130 08:07:24.697990 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" path="/var/lib/kubelet/pods/481f8c22-6e89-4093-b22e-428a31a476c9/volumes" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.470104 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2mdwz"] Jan 30 08:07:34 crc kubenswrapper[4520]: E0130 08:07:34.472237 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" containerName="registry-server" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.472312 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" containerName="registry-server" Jan 30 08:07:34 crc kubenswrapper[4520]: E0130 08:07:34.472390 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" containerName="extract-utilities" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.472450 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" containerName="extract-utilities" Jan 30 08:07:34 crc kubenswrapper[4520]: E0130 08:07:34.472526 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" containerName="extract-content" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.472588 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" containerName="extract-content" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.472892 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="481f8c22-6e89-4093-b22e-428a31a476c9" containerName="registry-server" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.477672 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.496232 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mdwz"] Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.585011 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-utilities\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.585379 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vsn\" (UniqueName: \"kubernetes.io/projected/24eb2753-e850-4ecd-a84e-51cc05dd6be4-kube-api-access-k4vsn\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.585711 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-catalog-content\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.688185 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-catalog-content\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.688286 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-utilities\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.688363 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4vsn\" (UniqueName: \"kubernetes.io/projected/24eb2753-e850-4ecd-a84e-51cc05dd6be4-kube-api-access-k4vsn\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.688800 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-catalog-content\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.689018 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-utilities\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.708991 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k4vsn\" (UniqueName: \"kubernetes.io/projected/24eb2753-e850-4ecd-a84e-51cc05dd6be4-kube-api-access-k4vsn\") pod \"redhat-operators-2mdwz\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:34 crc kubenswrapper[4520]: I0130 08:07:34.799260 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.278995 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wbvbs"] Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.281489 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.291229 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wbvbs"] Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.302536 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-utilities\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.302690 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7tn\" (UniqueName: \"kubernetes.io/projected/b72e5ed5-8856-47f1-ba20-36360806a587-kube-api-access-jl7tn\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.302825 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-catalog-content\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.399728 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mdwz"] Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.413587 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-utilities\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.414157 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7tn\" (UniqueName: \"kubernetes.io/projected/b72e5ed5-8856-47f1-ba20-36360806a587-kube-api-access-jl7tn\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.414243 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-utilities\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " 
pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.414263 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-catalog-content\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.414676 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-catalog-content\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.435540 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7tn\" (UniqueName: \"kubernetes.io/projected/b72e5ed5-8856-47f1-ba20-36360806a587-kube-api-access-jl7tn\") pod \"certified-operators-wbvbs\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.604342 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:35 crc kubenswrapper[4520]: I0130 08:07:35.650297 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mdwz" event={"ID":"24eb2753-e850-4ecd-a84e-51cc05dd6be4","Type":"ContainerStarted","Data":"b73c9823ffebc8af5498d6a536c976fea3b4a7b4c3cc037d3ddf93bf11a5e527"} Jan 30 08:07:36 crc kubenswrapper[4520]: I0130 08:07:36.563072 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wbvbs"] Jan 30 08:07:36 crc kubenswrapper[4520]: I0130 08:07:36.676284 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvbs" event={"ID":"b72e5ed5-8856-47f1-ba20-36360806a587","Type":"ContainerStarted","Data":"a7bbbe6b4e1b8839ccf2b10c20958926bce4f313e1cfd1ce2799d40651412677"} Jan 30 08:07:36 crc kubenswrapper[4520]: I0130 08:07:36.678722 4520 generic.go:334] "Generic (PLEG): container finished" podID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerID="9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472" exitCode=0 Jan 30 08:07:36 crc kubenswrapper[4520]: I0130 08:07:36.678765 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mdwz" event={"ID":"24eb2753-e850-4ecd-a84e-51cc05dd6be4","Type":"ContainerDied","Data":"9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472"} Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.678712 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sfr5n"] Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.687986 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.689934 4520 generic.go:334] "Generic (PLEG): container finished" podID="b72e5ed5-8856-47f1-ba20-36360806a587" containerID="a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277" exitCode=0 Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.689978 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvbs" event={"ID":"b72e5ed5-8856-47f1-ba20-36360806a587","Type":"ContainerDied","Data":"a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277"} Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.702063 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfr5n"] Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.767439 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-catalog-content\") pod \"community-operators-sfr5n\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.767484 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-utilities\") pod \"community-operators-sfr5n\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.767599 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhh6\" (UniqueName: \"kubernetes.io/projected/bb38e42e-bbd5-45da-8eda-644fd7930836-kube-api-access-drhh6\") pod \"community-operators-sfr5n\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.869473 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-catalog-content\") pod \"community-operators-sfr5n\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.869767 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-utilities\") pod \"community-operators-sfr5n\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.869817 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhh6\" (UniqueName: \"kubernetes.io/projected/bb38e42e-bbd5-45da-8eda-644fd7930836-kube-api-access-drhh6\") pod \"community-operators-sfr5n\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.869901 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-catalog-content\") pod \"community-operators-sfr5n\" 
(UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.870198 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-utilities\") pod \"community-operators-sfr5n\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:37 crc kubenswrapper[4520]: I0130 08:07:37.890467 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhh6\" (UniqueName: \"kubernetes.io/projected/bb38e42e-bbd5-45da-8eda-644fd7930836-kube-api-access-drhh6\") pod \"community-operators-sfr5n\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:38 crc kubenswrapper[4520]: I0130 08:07:38.011067 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:38 crc kubenswrapper[4520]: I0130 08:07:38.625477 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfr5n"] Jan 30 08:07:38 crc kubenswrapper[4520]: I0130 08:07:38.832820 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mdwz" event={"ID":"24eb2753-e850-4ecd-a84e-51cc05dd6be4","Type":"ContainerStarted","Data":"152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1"} Jan 30 08:07:38 crc kubenswrapper[4520]: I0130 08:07:38.842591 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfr5n" event={"ID":"bb38e42e-bbd5-45da-8eda-644fd7930836","Type":"ContainerStarted","Data":"1afc99d67603a14c817e74ea3b93ae0e13d26c0f3f20c7003acc64a5bb9c42a3"} Jan 30 08:07:38 crc kubenswrapper[4520]: I0130 08:07:38.843938 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvbs" event={"ID":"b72e5ed5-8856-47f1-ba20-36360806a587","Type":"ContainerStarted","Data":"5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431"} Jan 30 08:07:39 crc kubenswrapper[4520]: I0130 08:07:39.855718 4520 generic.go:334] "Generic (PLEG): container finished" podID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerID="908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e" exitCode=0 Jan 30 08:07:39 crc kubenswrapper[4520]: I0130 08:07:39.855803 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfr5n" event={"ID":"bb38e42e-bbd5-45da-8eda-644fd7930836","Type":"ContainerDied","Data":"908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e"} Jan 30 08:07:40 crc kubenswrapper[4520]: I0130 08:07:40.874082 4520 generic.go:334] "Generic (PLEG): container finished" podID="b72e5ed5-8856-47f1-ba20-36360806a587" containerID="5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431" exitCode=0 Jan 30 08:07:40 crc kubenswrapper[4520]: I0130 08:07:40.874130 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvbs" event={"ID":"b72e5ed5-8856-47f1-ba20-36360806a587","Type":"ContainerDied","Data":"5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431"} Jan 30 08:07:40 crc kubenswrapper[4520]: I0130 08:07:40.880276 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfr5n" 
event={"ID":"bb38e42e-bbd5-45da-8eda-644fd7930836","Type":"ContainerStarted","Data":"a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d"} Jan 30 08:07:41 crc kubenswrapper[4520]: I0130 08:07:41.893547 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mdwz" event={"ID":"24eb2753-e850-4ecd-a84e-51cc05dd6be4","Type":"ContainerDied","Data":"152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1"} Jan 30 08:07:41 crc kubenswrapper[4520]: I0130 08:07:41.893550 4520 generic.go:334] "Generic (PLEG): container finished" podID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerID="152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1" exitCode=0 Jan 30 08:07:41 crc kubenswrapper[4520]: I0130 08:07:41.898589 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvbs" event={"ID":"b72e5ed5-8856-47f1-ba20-36360806a587","Type":"ContainerStarted","Data":"023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3"} Jan 30 08:07:41 crc kubenswrapper[4520]: I0130 08:07:41.949920 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wbvbs" podStartSLOduration=3.201776992 podStartE2EDuration="6.949899931s" podCreationTimestamp="2026-01-30 08:07:35 +0000 UTC" firstStartedPulling="2026-01-30 08:07:37.690948631 +0000 UTC m=+4971.319300812" lastFinishedPulling="2026-01-30 08:07:41.43907157 +0000 UTC m=+4975.067423751" observedRunningTime="2026-01-30 08:07:41.936611597 +0000 UTC m=+4975.564963798" watchObservedRunningTime="2026-01-30 08:07:41.949899931 +0000 UTC m=+4975.578252112" Jan 30 08:07:42 crc kubenswrapper[4520]: I0130 08:07:42.915242 4520 generic.go:334] "Generic (PLEG): container finished" podID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerID="a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d" exitCode=0 Jan 30 08:07:42 crc kubenswrapper[4520]: I0130 08:07:42.915835 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfr5n" event={"ID":"bb38e42e-bbd5-45da-8eda-644fd7930836","Type":"ContainerDied","Data":"a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d"} Jan 30 08:07:42 crc kubenswrapper[4520]: I0130 08:07:42.929937 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mdwz" event={"ID":"24eb2753-e850-4ecd-a84e-51cc05dd6be4","Type":"ContainerStarted","Data":"9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff"} Jan 30 08:07:43 crc kubenswrapper[4520]: I0130 08:07:43.027968 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2mdwz" podStartSLOduration=3.295886834 podStartE2EDuration="9.027948988s" podCreationTimestamp="2026-01-30 08:07:34 +0000 UTC" firstStartedPulling="2026-01-30 08:07:36.680123653 +0000 UTC m=+4970.308475835" lastFinishedPulling="2026-01-30 08:07:42.412185808 +0000 UTC m=+4976.040537989" observedRunningTime="2026-01-30 08:07:43.014255051 +0000 UTC m=+4976.642607232" watchObservedRunningTime="2026-01-30 08:07:43.027948988 +0000 UTC m=+4976.656301169" Jan 30 08:07:43 crc kubenswrapper[4520]: I0130 08:07:43.946620 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfr5n" event={"ID":"bb38e42e-bbd5-45da-8eda-644fd7930836","Type":"ContainerStarted","Data":"676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8"} Jan 30 08:07:43 
crc kubenswrapper[4520]: I0130 08:07:43.971817 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sfr5n" podStartSLOduration=3.429994297 podStartE2EDuration="6.971801059s" podCreationTimestamp="2026-01-30 08:07:37 +0000 UTC" firstStartedPulling="2026-01-30 08:07:39.858162163 +0000 UTC m=+4973.486514343" lastFinishedPulling="2026-01-30 08:07:43.399968924 +0000 UTC m=+4977.028321105" observedRunningTime="2026-01-30 08:07:43.967714737 +0000 UTC m=+4977.596066919" watchObservedRunningTime="2026-01-30 08:07:43.971801059 +0000 UTC m=+4977.600153241" Jan 30 08:07:44 crc kubenswrapper[4520]: I0130 08:07:44.799359 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:44 crc kubenswrapper[4520]: I0130 08:07:44.799410 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:07:45 crc kubenswrapper[4520]: I0130 08:07:45.604586 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:45 crc kubenswrapper[4520]: I0130 08:07:45.604724 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:45 crc kubenswrapper[4520]: I0130 08:07:45.842280 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2mdwz" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="registry-server" probeResult="failure" output=< Jan 30 08:07:45 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 08:07:45 crc kubenswrapper[4520]: > Jan 30 08:07:46 crc kubenswrapper[4520]: I0130 08:07:46.639643 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wbvbs" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="registry-server" probeResult="failure" output=< Jan 30 08:07:46 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 08:07:46 crc kubenswrapper[4520]: > Jan 30 08:07:48 crc kubenswrapper[4520]: I0130 08:07:48.011632 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:48 crc kubenswrapper[4520]: I0130 08:07:48.012004 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:48 crc kubenswrapper[4520]: I0130 08:07:48.187860 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:49 crc kubenswrapper[4520]: I0130 08:07:49.025223 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:50 crc kubenswrapper[4520]: I0130 08:07:50.062059 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sfr5n"] Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.002791 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sfr5n" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerName="registry-server" containerID="cri-o://676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8" gracePeriod=2 Jan 30 08:07:51 crc kubenswrapper[4520]: 
I0130 08:07:51.728413 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.820428 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-utilities\") pod \"bb38e42e-bbd5-45da-8eda-644fd7930836\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.820498 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drhh6\" (UniqueName: \"kubernetes.io/projected/bb38e42e-bbd5-45da-8eda-644fd7930836-kube-api-access-drhh6\") pod \"bb38e42e-bbd5-45da-8eda-644fd7930836\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.820598 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-catalog-content\") pod \"bb38e42e-bbd5-45da-8eda-644fd7930836\" (UID: \"bb38e42e-bbd5-45da-8eda-644fd7930836\") " Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.821384 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-utilities" (OuterVolumeSpecName: "utilities") pod "bb38e42e-bbd5-45da-8eda-644fd7930836" (UID: "bb38e42e-bbd5-45da-8eda-644fd7930836"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.837973 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb38e42e-bbd5-45da-8eda-644fd7930836-kube-api-access-drhh6" (OuterVolumeSpecName: "kube-api-access-drhh6") pod "bb38e42e-bbd5-45da-8eda-644fd7930836" (UID: "bb38e42e-bbd5-45da-8eda-644fd7930836"). InnerVolumeSpecName "kube-api-access-drhh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.866606 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb38e42e-bbd5-45da-8eda-644fd7930836" (UID: "bb38e42e-bbd5-45da-8eda-644fd7930836"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.923396 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.923429 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drhh6\" (UniqueName: \"kubernetes.io/projected/bb38e42e-bbd5-45da-8eda-644fd7930836-kube-api-access-drhh6\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:51 crc kubenswrapper[4520]: I0130 08:07:51.923442 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb38e42e-bbd5-45da-8eda-644fd7930836-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.013094 4520 generic.go:334] "Generic (PLEG): container finished" podID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerID="676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8" exitCode=0 Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.013139 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfr5n" event={"ID":"bb38e42e-bbd5-45da-8eda-644fd7930836","Type":"ContainerDied","Data":"676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8"} Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.013178 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfr5n" event={"ID":"bb38e42e-bbd5-45da-8eda-644fd7930836","Type":"ContainerDied","Data":"1afc99d67603a14c817e74ea3b93ae0e13d26c0f3f20c7003acc64a5bb9c42a3"} Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.013191 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfr5n" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.013201 4520 scope.go:117] "RemoveContainer" containerID="676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.040597 4520 scope.go:117] "RemoveContainer" containerID="a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.043674 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sfr5n"] Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.053416 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sfr5n"] Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.078815 4520 scope.go:117] "RemoveContainer" containerID="908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.111267 4520 scope.go:117] "RemoveContainer" containerID="676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8" Jan 30 08:07:52 crc kubenswrapper[4520]: E0130 08:07:52.112301 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8\": container with ID starting with 676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8 not found: ID does not exist" containerID="676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.112337 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8"} err="failed to get container status \"676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8\": rpc error: code = NotFound desc = could not find container \"676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8\": container with ID starting with 676c1fab44a08218507402170c10064bd223df6227b122f656837c2522d65de8 not found: ID does not exist" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.112362 4520 scope.go:117] "RemoveContainer" containerID="a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d" Jan 30 08:07:52 crc kubenswrapper[4520]: E0130 08:07:52.112678 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d\": container with ID starting with a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d not found: ID does not exist" containerID="a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.112704 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d"} err="failed to get container status \"a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d\": rpc error: code = NotFound desc = could not find container \"a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d\": container with ID starting with a4c38014ef53e890b8754310b7653fc304f94ab0d0fb06b954c076c90908b76d not found: ID does not exist" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.112718 4520 scope.go:117] "RemoveContainer" 
containerID="908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e" Jan 30 08:07:52 crc kubenswrapper[4520]: E0130 08:07:52.112937 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e\": container with ID starting with 908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e not found: ID does not exist" containerID="908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.112958 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e"} err="failed to get container status \"908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e\": rpc error: code = NotFound desc = could not find container \"908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e\": container with ID starting with 908d8f3d63c1a168c1610c100f36af9e6b5537d2cc7893fa990d3c7165072c7e not found: ID does not exist" Jan 30 08:07:52 crc kubenswrapper[4520]: I0130 08:07:52.696905 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" path="/var/lib/kubelet/pods/bb38e42e-bbd5-45da-8eda-644fd7930836/volumes" Jan 30 08:07:55 crc kubenswrapper[4520]: I0130 08:07:55.642721 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:55 crc kubenswrapper[4520]: I0130 08:07:55.685384 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:55 crc kubenswrapper[4520]: I0130 08:07:55.838133 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2mdwz" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="registry-server" probeResult="failure" output=< Jan 30 08:07:55 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 08:07:55 crc kubenswrapper[4520]: > Jan 30 08:07:55 crc kubenswrapper[4520]: I0130 08:07:55.876247 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wbvbs"] Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.053221 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wbvbs" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="registry-server" containerID="cri-o://023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3" gracePeriod=2 Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.582266 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.752202 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl7tn\" (UniqueName: \"kubernetes.io/projected/b72e5ed5-8856-47f1-ba20-36360806a587-kube-api-access-jl7tn\") pod \"b72e5ed5-8856-47f1-ba20-36360806a587\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.752909 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-utilities\") pod \"b72e5ed5-8856-47f1-ba20-36360806a587\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.753325 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-catalog-content\") pod \"b72e5ed5-8856-47f1-ba20-36360806a587\" (UID: \"b72e5ed5-8856-47f1-ba20-36360806a587\") " Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.754061 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-utilities" (OuterVolumeSpecName: "utilities") pod "b72e5ed5-8856-47f1-ba20-36360806a587" (UID: "b72e5ed5-8856-47f1-ba20-36360806a587"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.762657 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72e5ed5-8856-47f1-ba20-36360806a587-kube-api-access-jl7tn" (OuterVolumeSpecName: "kube-api-access-jl7tn") pod "b72e5ed5-8856-47f1-ba20-36360806a587" (UID: "b72e5ed5-8856-47f1-ba20-36360806a587"). InnerVolumeSpecName "kube-api-access-jl7tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.806446 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b72e5ed5-8856-47f1-ba20-36360806a587" (UID: "b72e5ed5-8856-47f1-ba20-36360806a587"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.856153 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl7tn\" (UniqueName: \"kubernetes.io/projected/b72e5ed5-8856-47f1-ba20-36360806a587-kube-api-access-jl7tn\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.856192 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:57 crc kubenswrapper[4520]: I0130 08:07:57.856204 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72e5ed5-8856-47f1-ba20-36360806a587-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.064290 4520 generic.go:334] "Generic (PLEG): container finished" podID="b72e5ed5-8856-47f1-ba20-36360806a587" containerID="023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3" exitCode=0 Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.064344 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvbs" event={"ID":"b72e5ed5-8856-47f1-ba20-36360806a587","Type":"ContainerDied","Data":"023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3"} Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.064375 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbvbs" event={"ID":"b72e5ed5-8856-47f1-ba20-36360806a587","Type":"ContainerDied","Data":"a7bbbe6b4e1b8839ccf2b10c20958926bce4f313e1cfd1ce2799d40651412677"} Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.064397 4520 scope.go:117] "RemoveContainer" containerID="023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.064544 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wbvbs" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.085878 4520 scope.go:117] "RemoveContainer" containerID="5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.097573 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wbvbs"] Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.121352 4520 scope.go:117] "RemoveContainer" containerID="a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.124713 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wbvbs"] Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.162832 4520 scope.go:117] "RemoveContainer" containerID="023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3" Jan 30 08:07:58 crc kubenswrapper[4520]: E0130 08:07:58.163188 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3\": container with ID starting with 023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3 not found: ID does not exist" containerID="023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.163229 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3"} err="failed to get container status \"023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3\": rpc error: code = NotFound desc = could not find container \"023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3\": container with ID starting with 023f07f1234519dfa227cd0bc00d8ad1f14d57c89de6e5543c78abfe9105eac3 not found: ID does not exist" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.163253 4520 scope.go:117] "RemoveContainer" containerID="5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431" Jan 30 08:07:58 crc kubenswrapper[4520]: E0130 08:07:58.163859 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431\": container with ID starting with 5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431 not found: ID does not exist" containerID="5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.163905 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431"} err="failed to get container status \"5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431\": rpc error: code = NotFound desc = could not find container \"5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431\": container with ID starting with 5a91e6f90b0d913237d23c73cc4dc8592f35d29e85b8313de8539dcf0cbce431 not found: ID does not exist" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.163938 4520 scope.go:117] "RemoveContainer" containerID="a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277" Jan 30 08:07:58 crc kubenswrapper[4520]: E0130 08:07:58.164359 4520 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277\": container with ID starting with a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277 not found: ID does not exist" containerID="a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.164394 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277"} err="failed to get container status \"a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277\": rpc error: code = NotFound desc = could not find container \"a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277\": container with ID starting with a192741e8d929b74ca3bc25007e8e082c7e2bb1e69037f0a7efc3933242e9277 not found: ID does not exist" Jan 30 08:07:58 crc kubenswrapper[4520]: I0130 08:07:58.713850 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" path="/var/lib/kubelet/pods/b72e5ed5-8856-47f1-ba20-36360806a587/volumes" Jan 30 08:08:04 crc kubenswrapper[4520]: I0130 08:08:04.841770 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:08:04 crc kubenswrapper[4520]: I0130 08:08:04.886500 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:08:05 crc kubenswrapper[4520]: I0130 08:08:05.078248 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mdwz"] Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.137656 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2mdwz" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="registry-server" containerID="cri-o://9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff" gracePeriod=2 Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.563681 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.642600 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-utilities\") pod \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.642826 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-catalog-content\") pod \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.645001 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-utilities" (OuterVolumeSpecName: "utilities") pod "24eb2753-e850-4ecd-a84e-51cc05dd6be4" (UID: "24eb2753-e850-4ecd-a84e-51cc05dd6be4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.743603 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24eb2753-e850-4ecd-a84e-51cc05dd6be4" (UID: "24eb2753-e850-4ecd-a84e-51cc05dd6be4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.744065 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4vsn\" (UniqueName: \"kubernetes.io/projected/24eb2753-e850-4ecd-a84e-51cc05dd6be4-kube-api-access-k4vsn\") pod \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\" (UID: \"24eb2753-e850-4ecd-a84e-51cc05dd6be4\") " Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.745319 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.745404 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eb2753-e850-4ecd-a84e-51cc05dd6be4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.749460 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24eb2753-e850-4ecd-a84e-51cc05dd6be4-kube-api-access-k4vsn" (OuterVolumeSpecName: "kube-api-access-k4vsn") pod "24eb2753-e850-4ecd-a84e-51cc05dd6be4" (UID: "24eb2753-e850-4ecd-a84e-51cc05dd6be4"). InnerVolumeSpecName "kube-api-access-k4vsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:08:06 crc kubenswrapper[4520]: I0130 08:08:06.848549 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4vsn\" (UniqueName: \"kubernetes.io/projected/24eb2753-e850-4ecd-a84e-51cc05dd6be4-kube-api-access-k4vsn\") on node \"crc\" DevicePath \"\"" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.148239 4520 generic.go:334] "Generic (PLEG): container finished" podID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerID="9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff" exitCode=0 Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.148291 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mdwz" event={"ID":"24eb2753-e850-4ecd-a84e-51cc05dd6be4","Type":"ContainerDied","Data":"9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff"} Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.148336 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mdwz" event={"ID":"24eb2753-e850-4ecd-a84e-51cc05dd6be4","Type":"ContainerDied","Data":"b73c9823ffebc8af5498d6a536c976fea3b4a7b4c3cc037d3ddf93bf11a5e527"} Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.148377 4520 scope.go:117] "RemoveContainer" containerID="9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.149088 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mdwz" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.174833 4520 scope.go:117] "RemoveContainer" containerID="152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.184946 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mdwz"] Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.194217 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2mdwz"] Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.203673 4520 scope.go:117] "RemoveContainer" containerID="9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.243568 4520 scope.go:117] "RemoveContainer" containerID="9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff" Jan 30 08:08:07 crc kubenswrapper[4520]: E0130 08:08:07.244241 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff\": container with ID starting with 9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff not found: ID does not exist" containerID="9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.244283 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff"} err="failed to get container status \"9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff\": rpc error: code = NotFound desc = could not find container \"9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff\": container with ID starting with 9e960d42307aaf5d60de19308f1995dec21f7177da2a95e9658223050b880eff not found: ID does not exist" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.244311 4520 scope.go:117] "RemoveContainer" containerID="152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1" Jan 30 08:08:07 crc kubenswrapper[4520]: E0130 08:08:07.244700 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1\": container with ID starting with 152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1 not found: ID does not exist" containerID="152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.244722 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1"} err="failed to get container status \"152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1\": rpc error: code = NotFound desc = could not find container \"152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1\": container with ID starting with 152c672d00bb1e2a4d5bccf50de32b1a64bb8878144502146945950e9b568cd1 not found: ID does not exist" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.244738 4520 scope.go:117] "RemoveContainer" containerID="9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472" Jan 30 08:08:07 crc kubenswrapper[4520]: E0130 08:08:07.244999 4520 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472\": container with ID starting with 9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472 not found: ID does not exist" containerID="9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472" Jan 30 08:08:07 crc kubenswrapper[4520]: I0130 08:08:07.245022 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472"} err="failed to get container status \"9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472\": rpc error: code = NotFound desc = could not find container \"9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472\": container with ID starting with 9a8fbcee8169dddeccab69ee14a73abcf30509d2b19a17d9337b49a6225fb472 not found: ID does not exist" Jan 30 08:08:08 crc kubenswrapper[4520]: I0130 08:08:08.694044 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" path="/var/lib/kubelet/pods/24eb2753-e850-4ecd-a84e-51cc05dd6be4/volumes" Jan 30 08:08:27 crc kubenswrapper[4520]: I0130 08:08:27.793176 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:08:27 crc kubenswrapper[4520]: I0130 08:08:27.793807 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:08:57 crc kubenswrapper[4520]: I0130 08:08:57.793286 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:08:57 crc kubenswrapper[4520]: I0130 08:08:57.793960 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:09:27 crc kubenswrapper[4520]: I0130 08:09:27.794194 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:09:27 crc kubenswrapper[4520]: I0130 08:09:27.794930 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:09:27 crc kubenswrapper[4520]: I0130 08:09:27.794989 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 08:09:27 crc kubenswrapper[4520]: I0130 08:09:27.796322 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9818f2ae98bacfa393a4d8ebbb4fe38ee7080bf2654b52b1a9420fcf14b28e3"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:09:27 crc kubenswrapper[4520]: I0130 08:09:27.796401 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://f9818f2ae98bacfa393a4d8ebbb4fe38ee7080bf2654b52b1a9420fcf14b28e3" gracePeriod=600 Jan 30 08:09:28 crc kubenswrapper[4520]: I0130 08:09:28.894510 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="f9818f2ae98bacfa393a4d8ebbb4fe38ee7080bf2654b52b1a9420fcf14b28e3" exitCode=0 Jan 30 08:09:28 crc kubenswrapper[4520]: I0130 08:09:28.894570 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"f9818f2ae98bacfa393a4d8ebbb4fe38ee7080bf2654b52b1a9420fcf14b28e3"} Jan 30 08:09:28 crc kubenswrapper[4520]: I0130 08:09:28.895054 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb"} Jan 30 08:09:28 crc kubenswrapper[4520]: I0130 08:09:28.895111 4520 scope.go:117] "RemoveContainer" containerID="75e27a74f8e186a71cf9d6ec9744ee261fd1d47cfa9e79349b0e31fc7178aa81" Jan 30 08:09:46 crc kubenswrapper[4520]: E0130 08:09:46.459217 4520 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.87:60528->192.168.25.87:39417: write tcp 192.168.25.87:60528->192.168.25.87:39417: write: broken pipe Jan 30 08:11:57 crc kubenswrapper[4520]: I0130 08:11:57.793395 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:11:57 crc kubenswrapper[4520]: I0130 08:11:57.794135 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:12:27 crc kubenswrapper[4520]: I0130 08:12:27.793663 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:12:27 crc kubenswrapper[4520]: I0130 08:12:27.794178 4520 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:12:57 crc kubenswrapper[4520]: I0130 08:12:57.793341 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:12:57 crc kubenswrapper[4520]: I0130 08:12:57.794009 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:12:57 crc kubenswrapper[4520]: I0130 08:12:57.794071 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 08:12:57 crc kubenswrapper[4520]: I0130 08:12:57.795071 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:12:57 crc kubenswrapper[4520]: I0130 08:12:57.795130 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" gracePeriod=600 Jan 30 08:12:57 crc kubenswrapper[4520]: E0130 08:12:57.914183 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:12:58 crc kubenswrapper[4520]: I0130 08:12:58.659890 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" exitCode=0 Jan 30 08:12:58 crc kubenswrapper[4520]: I0130 08:12:58.660219 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb"} Jan 30 08:12:58 crc kubenswrapper[4520]: I0130 08:12:58.660269 4520 scope.go:117] "RemoveContainer" containerID="f9818f2ae98bacfa393a4d8ebbb4fe38ee7080bf2654b52b1a9420fcf14b28e3" Jan 30 08:12:58 crc kubenswrapper[4520]: I0130 08:12:58.661818 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:12:58 crc kubenswrapper[4520]: E0130 
08:12:58.662190 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:13:13 crc kubenswrapper[4520]: I0130 08:13:13.686483 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:13:13 crc kubenswrapper[4520]: E0130 08:13:13.687225 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:13:26 crc kubenswrapper[4520]: I0130 08:13:26.692394 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:13:26 crc kubenswrapper[4520]: E0130 08:13:26.693134 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:13:39 crc kubenswrapper[4520]: I0130 08:13:39.685749 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:13:39 crc kubenswrapper[4520]: E0130 08:13:39.686666 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:13:52 crc kubenswrapper[4520]: I0130 08:13:52.686851 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:13:52 crc kubenswrapper[4520]: E0130 08:13:52.687925 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:14:06 crc kubenswrapper[4520]: I0130 08:14:06.691770 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:14:06 crc kubenswrapper[4520]: E0130 08:14:06.692765 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:14:21 crc kubenswrapper[4520]: I0130 08:14:21.686244 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:14:21 crc kubenswrapper[4520]: E0130 08:14:21.687205 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:14:36 crc kubenswrapper[4520]: I0130 08:14:36.692127 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:14:36 crc kubenswrapper[4520]: E0130 08:14:36.693114 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:14:50 crc kubenswrapper[4520]: I0130 08:14:50.686122 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:14:50 crc kubenswrapper[4520]: E0130 08:14:50.687378 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.141712 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n"] Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142500 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerName="extract-utilities" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142548 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerName="extract-utilities" Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142579 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerName="extract-content" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142585 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerName="extract-content" Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142594 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="extract-utilities" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142600 4520 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="extract-utilities" Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142608 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="extract-content" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142613 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="extract-content" Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142626 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142631 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142645 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142650 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142667 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="extract-content" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142672 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="extract-content" Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142683 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142688 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: E0130 08:15:00.142696 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="extract-utilities" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142701 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="extract-utilities" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142861 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb38e42e-bbd5-45da-8eda-644fd7930836" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142881 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="24eb2753-e850-4ecd-a84e-51cc05dd6be4" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.142894 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="b72e5ed5-8856-47f1-ba20-36360806a587" containerName="registry-server" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.143488 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.149705 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.151010 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.154127 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n"] Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.175847 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-config-volume\") pod \"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.175930 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m42vv\" (UniqueName: \"kubernetes.io/projected/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-kube-api-access-m42vv\") pod \"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.175980 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-secret-volume\") pod \"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.277739 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-secret-volume\") pod \"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.277874 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-config-volume\") pod \"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.277929 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m42vv\" (UniqueName: \"kubernetes.io/projected/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-kube-api-access-m42vv\") pod \"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.278991 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-config-volume\") pod 
\"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.284262 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-secret-volume\") pod \"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.293099 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m42vv\" (UniqueName: \"kubernetes.io/projected/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-kube-api-access-m42vv\") pod \"collect-profiles-29496015-2zl6n\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.458632 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:00 crc kubenswrapper[4520]: I0130 08:15:00.906132 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n"] Jan 30 08:15:01 crc kubenswrapper[4520]: I0130 08:15:01.658142 4520 generic.go:334] "Generic (PLEG): container finished" podID="f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08" containerID="fd7cf3fe04931fb897a81e80b3441dd0feaca7afc4cc4dc9623dda1561941ef0" exitCode=0 Jan 30 08:15:01 crc kubenswrapper[4520]: I0130 08:15:01.658470 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" event={"ID":"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08","Type":"ContainerDied","Data":"fd7cf3fe04931fb897a81e80b3441dd0feaca7afc4cc4dc9623dda1561941ef0"} Jan 30 08:15:01 crc kubenswrapper[4520]: I0130 08:15:01.658554 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" event={"ID":"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08","Type":"ContainerStarted","Data":"a597ffcc12c06134a4c33c3c11bd941b0ea500792188b9673ba12b444a691616"} Jan 30 08:15:02 crc kubenswrapper[4520]: I0130 08:15:02.926168 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.027347 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-secret-volume\") pod \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.027418 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-config-volume\") pod \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.027689 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m42vv\" (UniqueName: \"kubernetes.io/projected/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-kube-api-access-m42vv\") pod \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\" (UID: \"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08\") " Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.028198 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08" (UID: "f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.028662 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.034424 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-kube-api-access-m42vv" (OuterVolumeSpecName: "kube-api-access-m42vv") pod "f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08" (UID: "f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08"). InnerVolumeSpecName "kube-api-access-m42vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.035187 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08" (UID: "f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.129379 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.129412 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m42vv\" (UniqueName: \"kubernetes.io/projected/f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08-kube-api-access-m42vv\") on node \"crc\" DevicePath \"\"" Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.674348 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" event={"ID":"f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08","Type":"ContainerDied","Data":"a597ffcc12c06134a4c33c3c11bd941b0ea500792188b9673ba12b444a691616"} Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.674695 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a597ffcc12c06134a4c33c3c11bd941b0ea500792188b9673ba12b444a691616" Jan 30 08:15:03 crc kubenswrapper[4520]: I0130 08:15:03.674399 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496015-2zl6n" Jan 30 08:15:04 crc kubenswrapper[4520]: I0130 08:15:04.009835 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9"] Jan 30 08:15:04 crc kubenswrapper[4520]: I0130 08:15:04.017401 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495970-ndnq9"] Jan 30 08:15:04 crc kubenswrapper[4520]: I0130 08:15:04.696109 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f064475-503c-498a-a244-60ec4c850544" path="/var/lib/kubelet/pods/5f064475-503c-498a-a244-60ec4c850544/volumes" Jan 30 08:15:05 crc kubenswrapper[4520]: I0130 08:15:05.686196 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:15:05 crc kubenswrapper[4520]: E0130 08:15:05.686709 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:15:20 crc kubenswrapper[4520]: I0130 08:15:20.686268 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:15:20 crc kubenswrapper[4520]: E0130 08:15:20.687173 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:15:34 crc kubenswrapper[4520]: I0130 08:15:34.686652 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:15:34 
crc kubenswrapper[4520]: E0130 08:15:34.687911 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:15:49 crc kubenswrapper[4520]: I0130 08:15:49.686263 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:15:49 crc kubenswrapper[4520]: E0130 08:15:49.688229 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:15:57 crc kubenswrapper[4520]: I0130 08:15:57.859369 4520 scope.go:117] "RemoveContainer" containerID="3c8d72686e0199f137341834da2fcb6279bdeeeef513acd33b269e419f473353" Jan 30 08:16:04 crc kubenswrapper[4520]: I0130 08:16:04.686311 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:16:04 crc kubenswrapper[4520]: E0130 08:16:04.687357 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:16:19 crc kubenswrapper[4520]: I0130 08:16:19.685954 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:16:19 crc kubenswrapper[4520]: E0130 08:16:19.687005 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:16:32 crc kubenswrapper[4520]: I0130 08:16:32.687671 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:16:32 crc kubenswrapper[4520]: E0130 08:16:32.688960 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:16:46 crc kubenswrapper[4520]: I0130 08:16:46.693842 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:16:46 crc 
kubenswrapper[4520]: E0130 08:16:46.696421 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:17:00 crc kubenswrapper[4520]: I0130 08:17:00.688228 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:17:00 crc kubenswrapper[4520]: E0130 08:17:00.689024 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.334772 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-67hpf"] Jan 30 08:17:10 crc kubenswrapper[4520]: E0130 08:17:10.336100 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08" containerName="collect-profiles" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.336116 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08" containerName="collect-profiles" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.336392 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e0f24c-cb6c-427d-9dc6-1d3e66c59b08" containerName="collect-profiles" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.340737 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.349340 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-67hpf"] Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.356411 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-catalog-content\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.356780 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-utilities\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.356896 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzqfh\" (UniqueName: \"kubernetes.io/projected/820453ca-187f-4302-a988-f156b62ae2ac-kube-api-access-qzqfh\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.459180 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-utilities\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.459257 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzqfh\" (UniqueName: \"kubernetes.io/projected/820453ca-187f-4302-a988-f156b62ae2ac-kube-api-access-qzqfh\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.459368 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-catalog-content\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.459722 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-utilities\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.459728 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-catalog-content\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.479586 4520 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qzqfh\" (UniqueName: \"kubernetes.io/projected/820453ca-187f-4302-a988-f156b62ae2ac-kube-api-access-qzqfh\") pod \"redhat-marketplace-67hpf\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:10 crc kubenswrapper[4520]: I0130 08:17:10.661992 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:11 crc kubenswrapper[4520]: I0130 08:17:11.152808 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-67hpf"] Jan 30 08:17:11 crc kubenswrapper[4520]: I0130 08:17:11.921778 4520 generic.go:334] "Generic (PLEG): container finished" podID="820453ca-187f-4302-a988-f156b62ae2ac" containerID="990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c" exitCode=0 Jan 30 08:17:11 crc kubenswrapper[4520]: I0130 08:17:11.921910 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67hpf" event={"ID":"820453ca-187f-4302-a988-f156b62ae2ac","Type":"ContainerDied","Data":"990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c"} Jan 30 08:17:11 crc kubenswrapper[4520]: I0130 08:17:11.922058 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67hpf" event={"ID":"820453ca-187f-4302-a988-f156b62ae2ac","Type":"ContainerStarted","Data":"6142400de27f6af4fa618e1f788b1da3e2f0927e7352cca1f5e78640934431d3"} Jan 30 08:17:11 crc kubenswrapper[4520]: I0130 08:17:11.924504 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:17:12 crc kubenswrapper[4520]: I0130 08:17:12.946764 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67hpf" event={"ID":"820453ca-187f-4302-a988-f156b62ae2ac","Type":"ContainerStarted","Data":"6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145"} Jan 30 08:17:13 crc kubenswrapper[4520]: I0130 08:17:13.956722 4520 generic.go:334] "Generic (PLEG): container finished" podID="820453ca-187f-4302-a988-f156b62ae2ac" containerID="6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145" exitCode=0 Jan 30 08:17:13 crc kubenswrapper[4520]: I0130 08:17:13.956745 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67hpf" event={"ID":"820453ca-187f-4302-a988-f156b62ae2ac","Type":"ContainerDied","Data":"6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145"} Jan 30 08:17:14 crc kubenswrapper[4520]: I0130 08:17:14.686007 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:17:14 crc kubenswrapper[4520]: E0130 08:17:14.686778 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:17:14 crc kubenswrapper[4520]: I0130 08:17:14.966393 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67hpf" 
event={"ID":"820453ca-187f-4302-a988-f156b62ae2ac","Type":"ContainerStarted","Data":"7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4"} Jan 30 08:17:14 crc kubenswrapper[4520]: I0130 08:17:14.986986 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-67hpf" podStartSLOduration=2.412881443 podStartE2EDuration="4.986964318s" podCreationTimestamp="2026-01-30 08:17:10 +0000 UTC" firstStartedPulling="2026-01-30 08:17:11.923847621 +0000 UTC m=+5545.552199802" lastFinishedPulling="2026-01-30 08:17:14.497930496 +0000 UTC m=+5548.126282677" observedRunningTime="2026-01-30 08:17:14.985505676 +0000 UTC m=+5548.613857847" watchObservedRunningTime="2026-01-30 08:17:14.986964318 +0000 UTC m=+5548.615316499" Jan 30 08:17:20 crc kubenswrapper[4520]: I0130 08:17:20.662813 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:20 crc kubenswrapper[4520]: I0130 08:17:20.663335 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:20 crc kubenswrapper[4520]: I0130 08:17:20.698277 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:21 crc kubenswrapper[4520]: I0130 08:17:21.052449 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:21 crc kubenswrapper[4520]: I0130 08:17:21.099508 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-67hpf"] Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.036589 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-67hpf" podUID="820453ca-187f-4302-a988-f156b62ae2ac" containerName="registry-server" containerID="cri-o://7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4" gracePeriod=2 Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.573471 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.672735 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-catalog-content\") pod \"820453ca-187f-4302-a988-f156b62ae2ac\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.672886 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-utilities\") pod \"820453ca-187f-4302-a988-f156b62ae2ac\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.673093 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzqfh\" (UniqueName: \"kubernetes.io/projected/820453ca-187f-4302-a988-f156b62ae2ac-kube-api-access-qzqfh\") pod \"820453ca-187f-4302-a988-f156b62ae2ac\" (UID: \"820453ca-187f-4302-a988-f156b62ae2ac\") " Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.674095 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-utilities" (OuterVolumeSpecName: "utilities") pod "820453ca-187f-4302-a988-f156b62ae2ac" (UID: "820453ca-187f-4302-a988-f156b62ae2ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.674993 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.682076 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/820453ca-187f-4302-a988-f156b62ae2ac-kube-api-access-qzqfh" (OuterVolumeSpecName: "kube-api-access-qzqfh") pod "820453ca-187f-4302-a988-f156b62ae2ac" (UID: "820453ca-187f-4302-a988-f156b62ae2ac"). InnerVolumeSpecName "kube-api-access-qzqfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.694613 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "820453ca-187f-4302-a988-f156b62ae2ac" (UID: "820453ca-187f-4302-a988-f156b62ae2ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.777567 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzqfh\" (UniqueName: \"kubernetes.io/projected/820453ca-187f-4302-a988-f156b62ae2ac-kube-api-access-qzqfh\") on node \"crc\" DevicePath \"\"" Jan 30 08:17:23 crc kubenswrapper[4520]: I0130 08:17:23.777745 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/820453ca-187f-4302-a988-f156b62ae2ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.047595 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67hpf" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.048113 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67hpf" event={"ID":"820453ca-187f-4302-a988-f156b62ae2ac","Type":"ContainerDied","Data":"7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4"} Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.048252 4520 scope.go:117] "RemoveContainer" containerID="7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.048550 4520 generic.go:334] "Generic (PLEG): container finished" podID="820453ca-187f-4302-a988-f156b62ae2ac" containerID="7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4" exitCode=0 Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.048663 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67hpf" event={"ID":"820453ca-187f-4302-a988-f156b62ae2ac","Type":"ContainerDied","Data":"6142400de27f6af4fa618e1f788b1da3e2f0927e7352cca1f5e78640934431d3"} Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.085615 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-67hpf"] Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.086604 4520 scope.go:117] "RemoveContainer" containerID="6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.091648 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-67hpf"] Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.127052 4520 scope.go:117] "RemoveContainer" containerID="990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.145025 4520 scope.go:117] "RemoveContainer" containerID="7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4" Jan 30 08:17:24 crc kubenswrapper[4520]: E0130 08:17:24.145489 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4\": container with ID starting with 7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4 not found: ID does not exist" containerID="7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.145543 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4"} err="failed to get container status \"7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4\": rpc error: code = NotFound desc = could not find container \"7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4\": container with ID starting with 7e49708f8d3943577ea365069f71b2464eeb25bcb2313f50d19ed51bf12bdfe4 not found: ID does not exist" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.145565 4520 scope.go:117] "RemoveContainer" containerID="6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145" Jan 30 08:17:24 crc kubenswrapper[4520]: E0130 08:17:24.145924 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145\": container with ID 
starting with 6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145 not found: ID does not exist" containerID="6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.145990 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145"} err="failed to get container status \"6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145\": rpc error: code = NotFound desc = could not find container \"6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145\": container with ID starting with 6ff67f643c1ae0d93667fca28704fbe27b369aefecb3df443e4e2fe9961e7145 not found: ID does not exist" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.146025 4520 scope.go:117] "RemoveContainer" containerID="990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c" Jan 30 08:17:24 crc kubenswrapper[4520]: E0130 08:17:24.146433 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c\": container with ID starting with 990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c not found: ID does not exist" containerID="990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.146461 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c"} err="failed to get container status \"990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c\": rpc error: code = NotFound desc = could not find container \"990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c\": container with ID starting with 990c4e7f202146fa28ea9604a0f7db9a3c57c173e60c07ef38b499778696c26c not found: ID does not exist" Jan 30 08:17:24 crc kubenswrapper[4520]: I0130 08:17:24.695300 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="820453ca-187f-4302-a988-f156b62ae2ac" path="/var/lib/kubelet/pods/820453ca-187f-4302-a988-f156b62ae2ac/volumes" Jan 30 08:17:28 crc kubenswrapper[4520]: I0130 08:17:28.686047 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:17:28 crc kubenswrapper[4520]: E0130 08:17:28.686749 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:17:42 crc kubenswrapper[4520]: I0130 08:17:42.685952 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:17:42 crc kubenswrapper[4520]: E0130 08:17:42.687600 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:17:53 crc kubenswrapper[4520]: I0130 08:17:53.686498 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:17:53 crc kubenswrapper[4520]: E0130 08:17:53.687374 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:18:06 crc kubenswrapper[4520]: I0130 08:18:06.690308 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:18:07 crc kubenswrapper[4520]: I0130 08:18:07.432794 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"a144a84e67748e984e28c7ea0d782f445f9dab8d6b3c1d91475dbed80ad97761"} Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.672292 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kdm5m"] Jan 30 08:18:20 crc kubenswrapper[4520]: E0130 08:18:20.673712 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="820453ca-187f-4302-a988-f156b62ae2ac" containerName="extract-content" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.673730 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="820453ca-187f-4302-a988-f156b62ae2ac" containerName="extract-content" Jan 30 08:18:20 crc kubenswrapper[4520]: E0130 08:18:20.673793 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="820453ca-187f-4302-a988-f156b62ae2ac" containerName="extract-utilities" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.673800 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="820453ca-187f-4302-a988-f156b62ae2ac" containerName="extract-utilities" Jan 30 08:18:20 crc kubenswrapper[4520]: E0130 08:18:20.673841 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="820453ca-187f-4302-a988-f156b62ae2ac" containerName="registry-server" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.673848 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="820453ca-187f-4302-a988-f156b62ae2ac" containerName="registry-server" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.674118 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="820453ca-187f-4302-a988-f156b62ae2ac" containerName="registry-server" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.676654 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.678376 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdm5m"] Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.811993 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-catalog-content\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.812056 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzq8\" (UniqueName: \"kubernetes.io/projected/ecace595-31cd-4f02-b747-b8ab9d26f185-kube-api-access-hrzq8\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.812089 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-utilities\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.914207 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-catalog-content\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.914430 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzq8\" (UniqueName: \"kubernetes.io/projected/ecace595-31cd-4f02-b747-b8ab9d26f185-kube-api-access-hrzq8\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.914567 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-utilities\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.914796 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-catalog-content\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.915255 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-utilities\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:20 crc kubenswrapper[4520]: I0130 08:18:20.934607 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hrzq8\" (UniqueName: \"kubernetes.io/projected/ecace595-31cd-4f02-b747-b8ab9d26f185-kube-api-access-hrzq8\") pod \"redhat-operators-kdm5m\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:21 crc kubenswrapper[4520]: I0130 08:18:21.007602 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:21 crc kubenswrapper[4520]: I0130 08:18:21.532975 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kdm5m"] Jan 30 08:18:21 crc kubenswrapper[4520]: I0130 08:18:21.574322 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdm5m" event={"ID":"ecace595-31cd-4f02-b747-b8ab9d26f185","Type":"ContainerStarted","Data":"02cb31572ee6fe149d225299a1a6bc67fd256b93222b3f417301378974d4d4f0"} Jan 30 08:18:22 crc kubenswrapper[4520]: I0130 08:18:22.581548 4520 generic.go:334] "Generic (PLEG): container finished" podID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerID="2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4" exitCode=0 Jan 30 08:18:22 crc kubenswrapper[4520]: I0130 08:18:22.581586 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdm5m" event={"ID":"ecace595-31cd-4f02-b747-b8ab9d26f185","Type":"ContainerDied","Data":"2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4"} Jan 30 08:18:23 crc kubenswrapper[4520]: I0130 08:18:23.592329 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdm5m" event={"ID":"ecace595-31cd-4f02-b747-b8ab9d26f185","Type":"ContainerStarted","Data":"d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d"} Jan 30 08:18:25 crc kubenswrapper[4520]: I0130 08:18:25.612726 4520 generic.go:334] "Generic (PLEG): container finished" podID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerID="d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d" exitCode=0 Jan 30 08:18:25 crc kubenswrapper[4520]: I0130 08:18:25.612782 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdm5m" event={"ID":"ecace595-31cd-4f02-b747-b8ab9d26f185","Type":"ContainerDied","Data":"d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d"} Jan 30 08:18:26 crc kubenswrapper[4520]: I0130 08:18:26.624555 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdm5m" event={"ID":"ecace595-31cd-4f02-b747-b8ab9d26f185","Type":"ContainerStarted","Data":"2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0"} Jan 30 08:18:26 crc kubenswrapper[4520]: I0130 08:18:26.650005 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kdm5m" podStartSLOduration=3.119970598 podStartE2EDuration="6.649983608s" podCreationTimestamp="2026-01-30 08:18:20 +0000 UTC" firstStartedPulling="2026-01-30 08:18:22.583445854 +0000 UTC m=+5616.211798035" lastFinishedPulling="2026-01-30 08:18:26.113458863 +0000 UTC m=+5619.741811045" observedRunningTime="2026-01-30 08:18:26.64293258 +0000 UTC m=+5620.271284762" watchObservedRunningTime="2026-01-30 08:18:26.649983608 +0000 UTC m=+5620.278335789" Jan 30 08:18:31 crc kubenswrapper[4520]: I0130 08:18:31.008093 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kdm5m" Jan 
30 08:18:31 crc kubenswrapper[4520]: I0130 08:18:31.008794 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:32 crc kubenswrapper[4520]: I0130 08:18:32.044444 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kdm5m" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerName="registry-server" probeResult="failure" output=< Jan 30 08:18:32 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s Jan 30 08:18:32 crc kubenswrapper[4520]: > Jan 30 08:18:41 crc kubenswrapper[4520]: I0130 08:18:41.047209 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:41 crc kubenswrapper[4520]: I0130 08:18:41.096277 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:41 crc kubenswrapper[4520]: I0130 08:18:41.307010 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdm5m"] Jan 30 08:18:42 crc kubenswrapper[4520]: I0130 08:18:42.782931 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kdm5m" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerName="registry-server" containerID="cri-o://2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0" gracePeriod=2 Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.408802 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.533139 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-catalog-content\") pod \"ecace595-31cd-4f02-b747-b8ab9d26f185\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.533308 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-utilities\") pod \"ecace595-31cd-4f02-b747-b8ab9d26f185\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.533599 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrzq8\" (UniqueName: \"kubernetes.io/projected/ecace595-31cd-4f02-b747-b8ab9d26f185-kube-api-access-hrzq8\") pod \"ecace595-31cd-4f02-b747-b8ab9d26f185\" (UID: \"ecace595-31cd-4f02-b747-b8ab9d26f185\") " Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.534695 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-utilities" (OuterVolumeSpecName: "utilities") pod "ecace595-31cd-4f02-b747-b8ab9d26f185" (UID: "ecace595-31cd-4f02-b747-b8ab9d26f185"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.541851 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecace595-31cd-4f02-b747-b8ab9d26f185-kube-api-access-hrzq8" (OuterVolumeSpecName: "kube-api-access-hrzq8") pod "ecace595-31cd-4f02-b747-b8ab9d26f185" (UID: "ecace595-31cd-4f02-b747-b8ab9d26f185"). InnerVolumeSpecName "kube-api-access-hrzq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.632717 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecace595-31cd-4f02-b747-b8ab9d26f185" (UID: "ecace595-31cd-4f02-b747-b8ab9d26f185"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.637653 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrzq8\" (UniqueName: \"kubernetes.io/projected/ecace595-31cd-4f02-b747-b8ab9d26f185-kube-api-access-hrzq8\") on node \"crc\" DevicePath \"\"" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.637690 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.637704 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecace595-31cd-4f02-b747-b8ab9d26f185-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.790846 4520 generic.go:334] "Generic (PLEG): container finished" podID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerID="2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0" exitCode=0 Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.790905 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kdm5m" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.790927 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdm5m" event={"ID":"ecace595-31cd-4f02-b747-b8ab9d26f185","Type":"ContainerDied","Data":"2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0"} Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.791273 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kdm5m" event={"ID":"ecace595-31cd-4f02-b747-b8ab9d26f185","Type":"ContainerDied","Data":"02cb31572ee6fe149d225299a1a6bc67fd256b93222b3f417301378974d4d4f0"} Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.791774 4520 scope.go:117] "RemoveContainer" containerID="2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.816861 4520 scope.go:117] "RemoveContainer" containerID="d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.834209 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kdm5m"] Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.851209 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kdm5m"] Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.852615 4520 scope.go:117] "RemoveContainer" containerID="2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.881287 4520 scope.go:117] "RemoveContainer" containerID="2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0" Jan 30 08:18:43 crc kubenswrapper[4520]: E0130 08:18:43.882772 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0\": container with ID starting with 2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0 not found: ID does not exist" containerID="2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.883447 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0"} err="failed to get container status \"2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0\": rpc error: code = NotFound desc = could not find container \"2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0\": container with ID starting with 2e319768d6595d1d212b21448a2b6cf74509891816201baf4647e899e6f68aa0 not found: ID does not exist" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.883491 4520 scope.go:117] "RemoveContainer" containerID="d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d" Jan 30 08:18:43 crc kubenswrapper[4520]: E0130 08:18:43.883993 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d\": container with ID starting with d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d not found: ID does not exist" containerID="d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.884023 4520 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d"} err="failed to get container status \"d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d\": rpc error: code = NotFound desc = could not find container \"d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d\": container with ID starting with d683fa747d5b2f9303ce56cfa207ea6a3548a73ff02165ed324dfe0094188b0d not found: ID does not exist" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.884048 4520 scope.go:117] "RemoveContainer" containerID="2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4" Jan 30 08:18:43 crc kubenswrapper[4520]: E0130 08:18:43.884303 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4\": container with ID starting with 2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4 not found: ID does not exist" containerID="2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4" Jan 30 08:18:43 crc kubenswrapper[4520]: I0130 08:18:43.884321 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4"} err="failed to get container status \"2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4\": rpc error: code = NotFound desc = could not find container \"2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4\": container with ID starting with 2c26d0e3bcb3b1a23fda1f4136f407c41522513de74e8318ccac055a59a0d6f4 not found: ID does not exist" Jan 30 08:18:44 crc kubenswrapper[4520]: I0130 08:18:44.694487 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" path="/var/lib/kubelet/pods/ecace595-31cd-4f02-b747-b8ab9d26f185/volumes" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.037617 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6ngw"] Jan 30 08:19:05 crc kubenswrapper[4520]: E0130 08:19:05.038425 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerName="extract-utilities" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.038444 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerName="extract-utilities" Jan 30 08:19:05 crc kubenswrapper[4520]: E0130 08:19:05.038472 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerName="extract-content" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.038479 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerName="extract-content" Jan 30 08:19:05 crc kubenswrapper[4520]: E0130 08:19:05.038489 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerName="registry-server" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.038495 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" containerName="registry-server" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.038790 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecace595-31cd-4f02-b747-b8ab9d26f185" 
containerName="registry-server" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.040191 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.068210 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6ngw"] Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.095415 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-catalog-content\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.095642 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdpzn\" (UniqueName: \"kubernetes.io/projected/d521314e-fadc-4bbd-9592-01e4892cd188-kube-api-access-rdpzn\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.095788 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-utilities\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.196375 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-utilities\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.196483 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-catalog-content\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.196577 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdpzn\" (UniqueName: \"kubernetes.io/projected/d521314e-fadc-4bbd-9592-01e4892cd188-kube-api-access-rdpzn\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.196894 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-utilities\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.196959 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-catalog-content\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " 
pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.216589 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdpzn\" (UniqueName: \"kubernetes.io/projected/d521314e-fadc-4bbd-9592-01e4892cd188-kube-api-access-rdpzn\") pod \"community-operators-h6ngw\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.359013 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.786408 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6ngw"] Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.978188 4520 generic.go:334] "Generic (PLEG): container finished" podID="d521314e-fadc-4bbd-9592-01e4892cd188" containerID="9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443" exitCode=0 Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.978259 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6ngw" event={"ID":"d521314e-fadc-4bbd-9592-01e4892cd188","Type":"ContainerDied","Data":"9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443"} Jan 30 08:19:05 crc kubenswrapper[4520]: I0130 08:19:05.978295 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6ngw" event={"ID":"d521314e-fadc-4bbd-9592-01e4892cd188","Type":"ContainerStarted","Data":"34dd8222f26371dadf6435c74a0497d510f11ea9b37a8025f09b6617455f6e47"} Jan 30 08:19:06 crc kubenswrapper[4520]: I0130 08:19:06.988305 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6ngw" event={"ID":"d521314e-fadc-4bbd-9592-01e4892cd188","Type":"ContainerStarted","Data":"ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7"} Jan 30 08:19:07 crc kubenswrapper[4520]: I0130 08:19:07.997004 4520 generic.go:334] "Generic (PLEG): container finished" podID="d521314e-fadc-4bbd-9592-01e4892cd188" containerID="ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7" exitCode=0 Jan 30 08:19:07 crc kubenswrapper[4520]: I0130 08:19:07.997126 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6ngw" event={"ID":"d521314e-fadc-4bbd-9592-01e4892cd188","Type":"ContainerDied","Data":"ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7"} Jan 30 08:19:09 crc kubenswrapper[4520]: I0130 08:19:09.006940 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6ngw" event={"ID":"d521314e-fadc-4bbd-9592-01e4892cd188","Type":"ContainerStarted","Data":"257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929"} Jan 30 08:19:09 crc kubenswrapper[4520]: I0130 08:19:09.102213 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h6ngw" podStartSLOduration=1.570379285 podStartE2EDuration="4.102193647s" podCreationTimestamp="2026-01-30 08:19:05 +0000 UTC" firstStartedPulling="2026-01-30 08:19:05.980063311 +0000 UTC m=+5659.608415491" lastFinishedPulling="2026-01-30 08:19:08.511877671 +0000 UTC m=+5662.140229853" observedRunningTime="2026-01-30 08:19:09.099834601 +0000 UTC m=+5662.728186782" watchObservedRunningTime="2026-01-30 08:19:09.102193647 
+0000 UTC m=+5662.730545828" Jan 30 08:19:15 crc kubenswrapper[4520]: I0130 08:19:15.359229 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:15 crc kubenswrapper[4520]: I0130 08:19:15.360155 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:15 crc kubenswrapper[4520]: I0130 08:19:15.401946 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:16 crc kubenswrapper[4520]: I0130 08:19:16.102539 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:16 crc kubenswrapper[4520]: I0130 08:19:16.147353 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6ngw"] Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.079876 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h6ngw" podUID="d521314e-fadc-4bbd-9592-01e4892cd188" containerName="registry-server" containerID="cri-o://257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929" gracePeriod=2 Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.549125 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.625061 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdpzn\" (UniqueName: \"kubernetes.io/projected/d521314e-fadc-4bbd-9592-01e4892cd188-kube-api-access-rdpzn\") pod \"d521314e-fadc-4bbd-9592-01e4892cd188\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.625417 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-utilities\") pod \"d521314e-fadc-4bbd-9592-01e4892cd188\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.625506 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-catalog-content\") pod \"d521314e-fadc-4bbd-9592-01e4892cd188\" (UID: \"d521314e-fadc-4bbd-9592-01e4892cd188\") " Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.629997 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-utilities" (OuterVolumeSpecName: "utilities") pod "d521314e-fadc-4bbd-9592-01e4892cd188" (UID: "d521314e-fadc-4bbd-9592-01e4892cd188"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.636120 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d521314e-fadc-4bbd-9592-01e4892cd188-kube-api-access-rdpzn" (OuterVolumeSpecName: "kube-api-access-rdpzn") pod "d521314e-fadc-4bbd-9592-01e4892cd188" (UID: "d521314e-fadc-4bbd-9592-01e4892cd188"). InnerVolumeSpecName "kube-api-access-rdpzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.693150 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d521314e-fadc-4bbd-9592-01e4892cd188" (UID: "d521314e-fadc-4bbd-9592-01e4892cd188"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.728736 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.728767 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d521314e-fadc-4bbd-9592-01e4892cd188-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:18 crc kubenswrapper[4520]: I0130 08:19:18.728780 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdpzn\" (UniqueName: \"kubernetes.io/projected/d521314e-fadc-4bbd-9592-01e4892cd188-kube-api-access-rdpzn\") on node \"crc\" DevicePath \"\"" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.091026 4520 generic.go:334] "Generic (PLEG): container finished" podID="d521314e-fadc-4bbd-9592-01e4892cd188" containerID="257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929" exitCode=0 Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.091100 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6ngw" event={"ID":"d521314e-fadc-4bbd-9592-01e4892cd188","Type":"ContainerDied","Data":"257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929"} Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.091134 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h6ngw" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.091156 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6ngw" event={"ID":"d521314e-fadc-4bbd-9592-01e4892cd188","Type":"ContainerDied","Data":"34dd8222f26371dadf6435c74a0497d510f11ea9b37a8025f09b6617455f6e47"} Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.091186 4520 scope.go:117] "RemoveContainer" containerID="257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.114642 4520 scope.go:117] "RemoveContainer" containerID="ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.114998 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6ngw"] Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.128535 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h6ngw"] Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.139618 4520 scope.go:117] "RemoveContainer" containerID="9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.165323 4520 scope.go:117] "RemoveContainer" containerID="257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929" Jan 30 08:19:19 crc kubenswrapper[4520]: E0130 08:19:19.165754 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929\": container with ID starting with 257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929 not found: ID does not exist" containerID="257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.165795 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929"} err="failed to get container status \"257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929\": rpc error: code = NotFound desc = could not find container \"257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929\": container with ID starting with 257f037416c59958d4a72020f07ceb880c5d055ecf35a219839b5bce31099929 not found: ID does not exist" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.165822 4520 scope.go:117] "RemoveContainer" containerID="ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7" Jan 30 08:19:19 crc kubenswrapper[4520]: E0130 08:19:19.166121 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7\": container with ID starting with ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7 not found: ID does not exist" containerID="ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.166151 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7"} err="failed to get container status \"ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7\": rpc error: code = NotFound desc = could not find 
container \"ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7\": container with ID starting with ad43ee4a2303b73911b66bf42265bdee77ccc1a3171c889e29b40f384bc7c8e7 not found: ID does not exist" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.166173 4520 scope.go:117] "RemoveContainer" containerID="9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443" Jan 30 08:19:19 crc kubenswrapper[4520]: E0130 08:19:19.166486 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443\": container with ID starting with 9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443 not found: ID does not exist" containerID="9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443" Jan 30 08:19:19 crc kubenswrapper[4520]: I0130 08:19:19.166587 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443"} err="failed to get container status \"9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443\": rpc error: code = NotFound desc = could not find container \"9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443\": container with ID starting with 9f3fc79e371e0fdc595ab5126b884c3ad941ea67af37b6642643fe8cae8d6443 not found: ID does not exist" Jan 30 08:19:20 crc kubenswrapper[4520]: I0130 08:19:20.695310 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d521314e-fadc-4bbd-9592-01e4892cd188" path="/var/lib/kubelet/pods/d521314e-fadc-4bbd-9592-01e4892cd188/volumes" Jan 30 08:20:27 crc kubenswrapper[4520]: I0130 08:20:27.793243 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:20:27 crc kubenswrapper[4520]: I0130 08:20:27.793813 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:20:48 crc kubenswrapper[4520]: I0130 08:20:48.969155 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d88db9f9c-l4v9v"] Jan 30 08:20:48 crc kubenswrapper[4520]: E0130 08:20:48.970119 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d521314e-fadc-4bbd-9592-01e4892cd188" containerName="extract-content" Jan 30 08:20:48 crc kubenswrapper[4520]: I0130 08:20:48.970136 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d521314e-fadc-4bbd-9592-01e4892cd188" containerName="extract-content" Jan 30 08:20:48 crc kubenswrapper[4520]: E0130 08:20:48.970172 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d521314e-fadc-4bbd-9592-01e4892cd188" containerName="registry-server" Jan 30 08:20:48 crc kubenswrapper[4520]: I0130 08:20:48.970179 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d521314e-fadc-4bbd-9592-01e4892cd188" containerName="registry-server" Jan 30 08:20:48 crc kubenswrapper[4520]: E0130 08:20:48.970202 4520 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d521314e-fadc-4bbd-9592-01e4892cd188" containerName="extract-utilities" Jan 30 08:20:48 crc kubenswrapper[4520]: I0130 08:20:48.970208 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="d521314e-fadc-4bbd-9592-01e4892cd188" containerName="extract-utilities" Jan 30 08:20:48 crc kubenswrapper[4520]: I0130 08:20:48.970404 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="d521314e-fadc-4bbd-9592-01e4892cd188" containerName="registry-server" Jan 30 08:20:48 crc kubenswrapper[4520]: I0130 08:20:48.971593 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.033318 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d88db9f9c-l4v9v"] Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.082172 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-httpd-config\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.082283 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-public-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.082309 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-internal-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.082361 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-ovndb-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.082378 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-config\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.082656 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-combined-ca-bundle\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.082731 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2kb\" (UniqueName: \"kubernetes.io/projected/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-kube-api-access-sp2kb\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: 
\"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.183801 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-public-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.185012 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-internal-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.185082 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-ovndb-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.185111 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-config\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.185183 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-combined-ca-bundle\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.185226 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp2kb\" (UniqueName: \"kubernetes.io/projected/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-kube-api-access-sp2kb\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.186305 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-httpd-config\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.194253 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-httpd-config\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.195035 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-public-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.197048 
4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-combined-ca-bundle\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.219213 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-internal-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.219751 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-ovndb-tls-certs\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.221724 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp2kb\" (UniqueName: \"kubernetes.io/projected/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-kube-api-access-sp2kb\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.231595 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d01d79e-48d0-46a4-8923-deb53fdb7ef5-config\") pod \"neutron-5d88db9f9c-l4v9v\" (UID: \"3d01d79e-48d0-46a4-8923-deb53fdb7ef5\") " pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:49 crc kubenswrapper[4520]: I0130 08:20:49.293388 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:50 crc kubenswrapper[4520]: W0130 08:20:50.154127 4520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d01d79e_48d0_46a4_8923_deb53fdb7ef5.slice/crio-34fb50838a0ff45c30bfe6576c7267d698f04fbc4935aec8617a8d1350abe498 WatchSource:0}: Error finding container 34fb50838a0ff45c30bfe6576c7267d698f04fbc4935aec8617a8d1350abe498: Status 404 returned error can't find the container with id 34fb50838a0ff45c30bfe6576c7267d698f04fbc4935aec8617a8d1350abe498 Jan 30 08:20:50 crc kubenswrapper[4520]: I0130 08:20:50.156944 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d88db9f9c-l4v9v"] Jan 30 08:20:51 crc kubenswrapper[4520]: I0130 08:20:51.025463 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d88db9f9c-l4v9v" event={"ID":"3d01d79e-48d0-46a4-8923-deb53fdb7ef5","Type":"ContainerStarted","Data":"9fd10db59dc74a7bfcb4a3245f6530426e42f28e33047519bcc7d9cc19fea7cd"} Jan 30 08:20:51 crc kubenswrapper[4520]: I0130 08:20:51.025751 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d88db9f9c-l4v9v" event={"ID":"3d01d79e-48d0-46a4-8923-deb53fdb7ef5","Type":"ContainerStarted","Data":"470c5a01bf30a07ae71ea157bf7ef0f230492eff45da17e01c011ea4d8c1965b"} Jan 30 08:20:51 crc kubenswrapper[4520]: I0130 08:20:51.025763 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d88db9f9c-l4v9v" event={"ID":"3d01d79e-48d0-46a4-8923-deb53fdb7ef5","Type":"ContainerStarted","Data":"34fb50838a0ff45c30bfe6576c7267d698f04fbc4935aec8617a8d1350abe498"} Jan 30 08:20:51 crc kubenswrapper[4520]: I0130 08:20:51.026868 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:20:51 crc kubenswrapper[4520]: I0130 08:20:51.047609 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d88db9f9c-l4v9v" podStartSLOduration=3.047592567 podStartE2EDuration="3.047592567s" podCreationTimestamp="2026-01-30 08:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:20:51.043455108 +0000 UTC m=+5764.671807289" watchObservedRunningTime="2026-01-30 08:20:51.047592567 +0000 UTC m=+5764.675944738" Jan 30 08:20:57 crc kubenswrapper[4520]: I0130 08:20:57.793854 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:20:57 crc kubenswrapper[4520]: I0130 08:20:57.794372 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:21:19 crc kubenswrapper[4520]: I0130 08:21:19.310419 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d88db9f9c-l4v9v" Jan 30 08:21:19 crc kubenswrapper[4520]: I0130 08:21:19.373113 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64d7b7f77f-brl5q"] Jan 30 08:21:19 crc 
kubenswrapper[4520]: I0130 08:21:19.373376 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64d7b7f77f-brl5q" podUID="1dda99e9-b232-4721-b801-18c61513277a" containerName="neutron-api" containerID="cri-o://46e8b12be233a7a79ddf09fed3c56fa1d7e5335ecf5f7fbd68eb747ebfd0145e" gracePeriod=30 Jan 30 08:21:19 crc kubenswrapper[4520]: I0130 08:21:19.373489 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64d7b7f77f-brl5q" podUID="1dda99e9-b232-4721-b801-18c61513277a" containerName="neutron-httpd" containerID="cri-o://bf64911203186a5f6d2cbf5275ea75c1912720659aca97a4fdb2edc32f399f6c" gracePeriod=30 Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.333638 4520 generic.go:334] "Generic (PLEG): container finished" podID="1dda99e9-b232-4721-b801-18c61513277a" containerID="bf64911203186a5f6d2cbf5275ea75c1912720659aca97a4fdb2edc32f399f6c" exitCode=0 Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.333852 4520 generic.go:334] "Generic (PLEG): container finished" podID="1dda99e9-b232-4721-b801-18c61513277a" containerID="46e8b12be233a7a79ddf09fed3c56fa1d7e5335ecf5f7fbd68eb747ebfd0145e" exitCode=0 Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.333879 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d7b7f77f-brl5q" event={"ID":"1dda99e9-b232-4721-b801-18c61513277a","Type":"ContainerDied","Data":"bf64911203186a5f6d2cbf5275ea75c1912720659aca97a4fdb2edc32f399f6c"} Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.333919 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d7b7f77f-brl5q" event={"ID":"1dda99e9-b232-4721-b801-18c61513277a","Type":"ContainerDied","Data":"46e8b12be233a7a79ddf09fed3c56fa1d7e5335ecf5f7fbd68eb747ebfd0145e"} Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.404345 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.501745 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-ovndb-tls-certs\") pod \"1dda99e9-b232-4721-b801-18c61513277a\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.502085 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-config\") pod \"1dda99e9-b232-4721-b801-18c61513277a\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.502136 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-combined-ca-bundle\") pod \"1dda99e9-b232-4721-b801-18c61513277a\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.502337 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-httpd-config\") pod \"1dda99e9-b232-4721-b801-18c61513277a\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.502369 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-public-tls-certs\") pod \"1dda99e9-b232-4721-b801-18c61513277a\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.503057 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7sn4\" (UniqueName: \"kubernetes.io/projected/1dda99e9-b232-4721-b801-18c61513277a-kube-api-access-r7sn4\") pod \"1dda99e9-b232-4721-b801-18c61513277a\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.503120 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-internal-tls-certs\") pod \"1dda99e9-b232-4721-b801-18c61513277a\" (UID: \"1dda99e9-b232-4721-b801-18c61513277a\") " Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.510670 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1dda99e9-b232-4721-b801-18c61513277a" (UID: "1dda99e9-b232-4721-b801-18c61513277a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.515025 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dda99e9-b232-4721-b801-18c61513277a-kube-api-access-r7sn4" (OuterVolumeSpecName: "kube-api-access-r7sn4") pod "1dda99e9-b232-4721-b801-18c61513277a" (UID: "1dda99e9-b232-4721-b801-18c61513277a"). InnerVolumeSpecName "kube-api-access-r7sn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.556921 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1dda99e9-b232-4721-b801-18c61513277a" (UID: "1dda99e9-b232-4721-b801-18c61513277a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.558888 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dda99e9-b232-4721-b801-18c61513277a" (UID: "1dda99e9-b232-4721-b801-18c61513277a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.564689 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1dda99e9-b232-4721-b801-18c61513277a" (UID: "1dda99e9-b232-4721-b801-18c61513277a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.575880 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-config" (OuterVolumeSpecName: "config") pod "1dda99e9-b232-4721-b801-18c61513277a" (UID: "1dda99e9-b232-4721-b801-18c61513277a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.584146 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1dda99e9-b232-4721-b801-18c61513277a" (UID: "1dda99e9-b232-4721-b801-18c61513277a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.608160 4520 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.608190 4520 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.608205 4520 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.608217 4520 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.608227 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7sn4\" (UniqueName: \"kubernetes.io/projected/1dda99e9-b232-4721-b801-18c61513277a-kube-api-access-r7sn4\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.608237 4520 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:21 crc kubenswrapper[4520]: I0130 08:21:21.608246 4520 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dda99e9-b232-4721-b801-18c61513277a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:21:22 crc kubenswrapper[4520]: I0130 08:21:22.347018 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64d7b7f77f-brl5q" event={"ID":"1dda99e9-b232-4721-b801-18c61513277a","Type":"ContainerDied","Data":"0b00cd6f29364a67255ac6e3e5919caa80b753f0abf32c9d323fe6f32e215105"} Jan 30 08:21:22 crc kubenswrapper[4520]: I0130 08:21:22.347080 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64d7b7f77f-brl5q" Jan 30 08:21:22 crc kubenswrapper[4520]: I0130 08:21:22.347134 4520 scope.go:117] "RemoveContainer" containerID="bf64911203186a5f6d2cbf5275ea75c1912720659aca97a4fdb2edc32f399f6c" Jan 30 08:21:22 crc kubenswrapper[4520]: I0130 08:21:22.385764 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64d7b7f77f-brl5q"] Jan 30 08:21:22 crc kubenswrapper[4520]: I0130 08:21:22.393870 4520 scope.go:117] "RemoveContainer" containerID="46e8b12be233a7a79ddf09fed3c56fa1d7e5335ecf5f7fbd68eb747ebfd0145e" Jan 30 08:21:22 crc kubenswrapper[4520]: I0130 08:21:22.397179 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64d7b7f77f-brl5q"] Jan 30 08:21:22 crc kubenswrapper[4520]: I0130 08:21:22.700940 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dda99e9-b232-4721-b801-18c61513277a" path="/var/lib/kubelet/pods/1dda99e9-b232-4721-b801-18c61513277a/volumes" Jan 30 08:21:27 crc kubenswrapper[4520]: I0130 08:21:27.793563 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:21:27 crc kubenswrapper[4520]: I0130 08:21:27.794923 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:21:27 crc kubenswrapper[4520]: I0130 08:21:27.795006 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 08:21:27 crc kubenswrapper[4520]: I0130 08:21:27.795834 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a144a84e67748e984e28c7ea0d782f445f9dab8d6b3c1d91475dbed80ad97761"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:21:27 crc kubenswrapper[4520]: I0130 08:21:27.795910 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://a144a84e67748e984e28c7ea0d782f445f9dab8d6b3c1d91475dbed80ad97761" gracePeriod=600 Jan 30 08:21:28 crc kubenswrapper[4520]: I0130 08:21:28.410610 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="a144a84e67748e984e28c7ea0d782f445f9dab8d6b3c1d91475dbed80ad97761" exitCode=0 Jan 30 08:21:28 crc kubenswrapper[4520]: I0130 08:21:28.410676 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"a144a84e67748e984e28c7ea0d782f445f9dab8d6b3c1d91475dbed80ad97761"} Jan 30 08:21:28 crc kubenswrapper[4520]: I0130 08:21:28.410989 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"} Jan 30 08:21:28 crc kubenswrapper[4520]: I0130 08:21:28.411012 4520 scope.go:117] "RemoveContainer" containerID="5bf99a70e835280e041759c379d0b5c1d28d20267306cf6c29f1e0b2bb51fcbb" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.945716 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k5bx2"] Jan 30 08:22:52 crc kubenswrapper[4520]: E0130 08:22:52.946724 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dda99e9-b232-4721-b801-18c61513277a" containerName="neutron-api" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.946740 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dda99e9-b232-4721-b801-18c61513277a" containerName="neutron-api" Jan 30 08:22:52 crc kubenswrapper[4520]: E0130 08:22:52.946777 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dda99e9-b232-4721-b801-18c61513277a" containerName="neutron-httpd" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.946784 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dda99e9-b232-4721-b801-18c61513277a" containerName="neutron-httpd" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.946982 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dda99e9-b232-4721-b801-18c61513277a" containerName="neutron-httpd" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.946994 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dda99e9-b232-4721-b801-18c61513277a" containerName="neutron-api" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.948436 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.954937 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k5bx2"] Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.991181 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxg4c\" (UniqueName: \"kubernetes.io/projected/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-kube-api-access-nxg4c\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.991367 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-catalog-content\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:52 crc kubenswrapper[4520]: I0130 08:22:52.991437 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-utilities\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:53 crc kubenswrapper[4520]: I0130 08:22:53.094036 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-catalog-content\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:53 crc kubenswrapper[4520]: I0130 08:22:53.094136 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-utilities\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:53 crc kubenswrapper[4520]: I0130 08:22:53.094378 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxg4c\" (UniqueName: \"kubernetes.io/projected/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-kube-api-access-nxg4c\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:53 crc kubenswrapper[4520]: I0130 08:22:53.094617 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-catalog-content\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:53 crc kubenswrapper[4520]: I0130 08:22:53.094844 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-utilities\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:53 crc kubenswrapper[4520]: I0130 08:22:53.113731 4520 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nxg4c\" (UniqueName: \"kubernetes.io/projected/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-kube-api-access-nxg4c\") pod \"certified-operators-k5bx2\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") " pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:53 crc kubenswrapper[4520]: I0130 08:22:53.271773 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k5bx2" Jan 30 08:22:53 crc kubenswrapper[4520]: I0130 08:22:53.757883 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k5bx2"] Jan 30 08:22:54 crc kubenswrapper[4520]: I0130 08:22:54.210350 4520 generic.go:334] "Generic (PLEG): container finished" podID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" containerID="048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93" exitCode=0 Jan 30 08:22:54 crc kubenswrapper[4520]: I0130 08:22:54.210479 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k5bx2" event={"ID":"66ab083e-b2e1-4f06-a5f1-fa26de1442b0","Type":"ContainerDied","Data":"048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93"} Jan 30 08:22:54 crc kubenswrapper[4520]: I0130 08:22:54.210718 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k5bx2" event={"ID":"66ab083e-b2e1-4f06-a5f1-fa26de1442b0","Type":"ContainerStarted","Data":"e5b48b5d2224d8babab25809d1abc30200617ae45f68e48e381814365bdff093"} Jan 30 08:22:54 crc kubenswrapper[4520]: I0130 08:22:54.213726 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:22:55 crc kubenswrapper[4520]: I0130 08:22:55.221039 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k5bx2" event={"ID":"66ab083e-b2e1-4f06-a5f1-fa26de1442b0","Type":"ContainerStarted","Data":"1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f"} Jan 30 08:22:56 crc kubenswrapper[4520]: I0130 08:22:56.234050 4520 generic.go:334] "Generic (PLEG): container finished" podID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" containerID="1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f" exitCode=0 Jan 30 08:22:56 crc kubenswrapper[4520]: I0130 08:22:56.234186 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k5bx2" event={"ID":"66ab083e-b2e1-4f06-a5f1-fa26de1442b0","Type":"ContainerDied","Data":"1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f"} Jan 30 08:22:57 crc kubenswrapper[4520]: I0130 08:22:57.250493 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k5bx2" event={"ID":"66ab083e-b2e1-4f06-a5f1-fa26de1442b0","Type":"ContainerStarted","Data":"76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8"} Jan 30 08:22:57 crc kubenswrapper[4520]: I0130 08:22:57.269178 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k5bx2" podStartSLOduration=2.7851879889999998 podStartE2EDuration="5.269158905s" podCreationTimestamp="2026-01-30 08:22:52 +0000 UTC" firstStartedPulling="2026-01-30 08:22:54.212699474 +0000 UTC m=+5887.841051655" lastFinishedPulling="2026-01-30 08:22:56.69667039 +0000 UTC m=+5890.325022571" observedRunningTime="2026-01-30 08:22:57.267576049 +0000 UTC m=+5890.895928231" watchObservedRunningTime="2026-01-30 
Jan 30 08:23:03 crc kubenswrapper[4520]: I0130 08:23:03.272871 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k5bx2"
Jan 30 08:23:03 crc kubenswrapper[4520]: I0130 08:23:03.273465 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k5bx2"
Jan 30 08:23:03 crc kubenswrapper[4520]: I0130 08:23:03.318958 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k5bx2"
Jan 30 08:23:03 crc kubenswrapper[4520]: I0130 08:23:03.364804 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k5bx2"
Jan 30 08:23:03 crc kubenswrapper[4520]: I0130 08:23:03.553328 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k5bx2"]
Jan 30 08:23:05 crc kubenswrapper[4520]: I0130 08:23:05.321951 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k5bx2" podUID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" containerName="registry-server" containerID="cri-o://76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8" gracePeriod=2
Jan 30 08:23:05 crc kubenswrapper[4520]: I0130 08:23:05.889799 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k5bx2"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.002721 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-utilities\") pod \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") "
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.002856 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxg4c\" (UniqueName: \"kubernetes.io/projected/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-kube-api-access-nxg4c\") pod \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") "
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.002943 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-catalog-content\") pod \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\" (UID: \"66ab083e-b2e1-4f06-a5f1-fa26de1442b0\") "
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.003718 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-utilities" (OuterVolumeSpecName: "utilities") pod "66ab083e-b2e1-4f06-a5f1-fa26de1442b0" (UID: "66ab083e-b2e1-4f06-a5f1-fa26de1442b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.004019 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.015822 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-kube-api-access-nxg4c" (OuterVolumeSpecName: "kube-api-access-nxg4c") pod "66ab083e-b2e1-4f06-a5f1-fa26de1442b0" (UID: "66ab083e-b2e1-4f06-a5f1-fa26de1442b0"). InnerVolumeSpecName "kube-api-access-nxg4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.051543 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66ab083e-b2e1-4f06-a5f1-fa26de1442b0" (UID: "66ab083e-b2e1-4f06-a5f1-fa26de1442b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.107821 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxg4c\" (UniqueName: \"kubernetes.io/projected/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-kube-api-access-nxg4c\") on node \"crc\" DevicePath \"\""
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.107858 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ab083e-b2e1-4f06-a5f1-fa26de1442b0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.346571 4520 generic.go:334] "Generic (PLEG): container finished" podID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" containerID="76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8" exitCode=0
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.346648 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k5bx2" event={"ID":"66ab083e-b2e1-4f06-a5f1-fa26de1442b0","Type":"ContainerDied","Data":"76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8"}
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.346684 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k5bx2"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.346714 4520 scope.go:117] "RemoveContainer" containerID="76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.346693 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k5bx2" event={"ID":"66ab083e-b2e1-4f06-a5f1-fa26de1442b0","Type":"ContainerDied","Data":"e5b48b5d2224d8babab25809d1abc30200617ae45f68e48e381814365bdff093"}
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.386165 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k5bx2"]
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.387641 4520 scope.go:117] "RemoveContainer" containerID="1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.392102 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k5bx2"]
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.416459 4520 scope.go:117] "RemoveContainer" containerID="048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.446543 4520 scope.go:117] "RemoveContainer" containerID="76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8"
Jan 30 08:23:06 crc kubenswrapper[4520]: E0130 08:23:06.447767 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8\": container with ID starting with 76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8 not found: ID does not exist" containerID="76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.447826 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8"} err="failed to get container status \"76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8\": rpc error: code = NotFound desc = could not find container \"76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8\": container with ID starting with 76a5eb177c69121401693693c65712f0e01a2aca43d969e05dcdc5fc1ec9d3c8 not found: ID does not exist"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.447855 4520 scope.go:117] "RemoveContainer" containerID="1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f"
Jan 30 08:23:06 crc kubenswrapper[4520]: E0130 08:23:06.448235 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f\": container with ID starting with 1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f not found: ID does not exist" containerID="1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.448263 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f"} err="failed to get container status \"1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f\": rpc error: code = NotFound desc = could not find container \"1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f\": container with ID starting with 1ab6ed06991b229b5a3f5d87a930d701b8533679edaaf184fa21fa362c9ea48f not found: ID does not exist"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.448276 4520 scope.go:117] "RemoveContainer" containerID="048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93"
Jan 30 08:23:06 crc kubenswrapper[4520]: E0130 08:23:06.448925 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93\": container with ID starting with 048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93 not found: ID does not exist" containerID="048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.448947 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93"} err="failed to get container status \"048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93\": rpc error: code = NotFound desc = could not find container \"048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93\": container with ID starting with 048e5d2b8623c8bad2b93a04046b63e79ef56615fcd803fe402612a14c22da93 not found: ID does not exist"
Jan 30 08:23:06 crc kubenswrapper[4520]: I0130 08:23:06.698128 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" path="/var/lib/kubelet/pods/66ab083e-b2e1-4f06-a5f1-fa26de1442b0/volumes"
Jan 30 08:23:57 crc kubenswrapper[4520]: I0130 08:23:57.793741 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:23:57 crc kubenswrapper[4520]: I0130 08:23:57.794581 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:24:27 crc kubenswrapper[4520]: I0130 08:24:27.793853 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:24:27 crc kubenswrapper[4520]: I0130 08:24:27.794404 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:24:57 crc kubenswrapper[4520]: I0130 08:24:57.793833 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:24:57 crc kubenswrapper[4520]: I0130 08:24:57.794469 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:24:57 crc kubenswrapper[4520]: I0130 08:24:57.794542 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt"
Jan 30 08:24:57 crc kubenswrapper[4520]: I0130 08:24:57.795279 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 08:24:57 crc kubenswrapper[4520]: I0130 08:24:57.795345 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104" gracePeriod=600
Jan 30 08:24:57 crc kubenswrapper[4520]: E0130 08:24:57.914058 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:24:58 crc kubenswrapper[4520]: I0130 08:24:58.469329 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104" exitCode=0
Jan 30 08:24:58 crc kubenswrapper[4520]: I0130 08:24:58.469387 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"}
Jan 30 08:24:58 crc kubenswrapper[4520]: I0130 08:24:58.469432 4520 scope.go:117] "RemoveContainer" containerID="a144a84e67748e984e28c7ea0d782f445f9dab8d6b3c1d91475dbed80ad97761"
Jan 30 08:24:58 crc kubenswrapper[4520]: I0130 08:24:58.470656 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:24:58 crc kubenswrapper[4520]: E0130 08:24:58.471202 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:25:12 crc kubenswrapper[4520]: I0130 08:25:12.686117 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:25:12 crc kubenswrapper[4520]: E0130 08:25:12.687124 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:25:28 crc kubenswrapper[4520]: I0130 08:25:28.686268 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:25:28 crc kubenswrapper[4520]: E0130 08:25:28.687171 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:25:43 crc kubenswrapper[4520]: I0130 08:25:43.686676 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:25:43 crc kubenswrapper[4520]: E0130 08:25:43.687587 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:25:58 crc kubenswrapper[4520]: I0130 08:25:58.687041 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:25:58 crc kubenswrapper[4520]: E0130 08:25:58.688333 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:26:13 crc kubenswrapper[4520]: I0130 08:26:13.686321 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:26:13 crc kubenswrapper[4520]: E0130 08:26:13.687039 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:26:28 crc kubenswrapper[4520]: I0130 08:26:28.686698 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:26:28 crc kubenswrapper[4520]: E0130 08:26:28.687433 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:26:40 crc kubenswrapper[4520]: I0130 08:26:40.685887 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:26:40 crc kubenswrapper[4520]: E0130 08:26:40.686712 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:26:52 crc kubenswrapper[4520]: I0130 08:26:52.687140 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:26:52 crc kubenswrapper[4520]: E0130 08:26:52.688412 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:27:05 crc kubenswrapper[4520]: I0130 08:27:05.685715 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:27:05 crc kubenswrapper[4520]: E0130 08:27:05.687971 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:27:17 crc kubenswrapper[4520]: I0130 08:27:17.687510 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:27:17 crc kubenswrapper[4520]: E0130 08:27:17.689023 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:27:30 crc kubenswrapper[4520]: I0130 08:27:30.685731 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:27:30 crc kubenswrapper[4520]: E0130 08:27:30.686599 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:27:45 crc kubenswrapper[4520]: I0130 08:27:45.685634 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:27:45 crc kubenswrapper[4520]: E0130 08:27:45.686625 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:27:58 crc kubenswrapper[4520]: I0130 08:27:58.685954 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:27:58 crc kubenswrapper[4520]: E0130 08:27:58.686914 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:28:12 crc kubenswrapper[4520]: I0130 08:28:12.716763 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:28:12 crc kubenswrapper[4520]: E0130 08:28:12.717934 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:28:23 crc kubenswrapper[4520]: I0130 08:28:23.685906 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:28:23 crc kubenswrapper[4520]: E0130 08:28:23.686871 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
containerName="extract-content" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.771132 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" containerName="extract-content" Jan 30 08:28:29 crc kubenswrapper[4520]: E0130 08:28:29.771195 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" containerName="registry-server" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.771244 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" containerName="registry-server" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.771562 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ab083e-b2e1-4f06-a5f1-fa26de1442b0" containerName="registry-server" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.772949 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.777669 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwvz6"] Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.816429 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-utilities\") pod \"redhat-marketplace-qwvz6\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.816775 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpn4j\" (UniqueName: \"kubernetes.io/projected/c5824fe4-0315-467e-901a-85063c1fcdb9-kube-api-access-hpn4j\") pod \"redhat-marketplace-qwvz6\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.817254 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-catalog-content\") pod \"redhat-marketplace-qwvz6\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.919291 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpn4j\" (UniqueName: \"kubernetes.io/projected/c5824fe4-0315-467e-901a-85063c1fcdb9-kube-api-access-hpn4j\") pod \"redhat-marketplace-qwvz6\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.919420 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-catalog-content\") pod \"redhat-marketplace-qwvz6\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.919479 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-utilities\") pod \"redhat-marketplace-qwvz6\" (UID: 
\"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.920041 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-catalog-content\") pod \"redhat-marketplace-qwvz6\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.920045 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-utilities\") pod \"redhat-marketplace-qwvz6\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:29 crc kubenswrapper[4520]: I0130 08:28:29.944203 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpn4j\" (UniqueName: \"kubernetes.io/projected/c5824fe4-0315-467e-901a-85063c1fcdb9-kube-api-access-hpn4j\") pod \"redhat-marketplace-qwvz6\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:30 crc kubenswrapper[4520]: I0130 08:28:30.097610 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:30 crc kubenswrapper[4520]: I0130 08:28:30.562127 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwvz6"] Jan 30 08:28:31 crc kubenswrapper[4520]: I0130 08:28:31.452886 4520 generic.go:334] "Generic (PLEG): container finished" podID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerID="a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12" exitCode=0 Jan 30 08:28:31 crc kubenswrapper[4520]: I0130 08:28:31.452992 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerDied","Data":"a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12"} Jan 30 08:28:31 crc kubenswrapper[4520]: I0130 08:28:31.454120 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerStarted","Data":"c7f8b7eed94838d3bae07f79ba1303d7c721ad6cb09aa530ad5275fa966fc8f7"} Jan 30 08:28:31 crc kubenswrapper[4520]: I0130 08:28:31.455546 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:28:32 crc kubenswrapper[4520]: I0130 08:28:32.463218 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerStarted","Data":"a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3"} Jan 30 08:28:33 crc kubenswrapper[4520]: I0130 08:28:33.474832 4520 generic.go:334] "Generic (PLEG): container finished" podID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerID="a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3" exitCode=0 Jan 30 08:28:33 crc kubenswrapper[4520]: I0130 08:28:33.474894 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" 
event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerDied","Data":"a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3"} Jan 30 08:28:34 crc kubenswrapper[4520]: I0130 08:28:34.485845 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerStarted","Data":"1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210"} Jan 30 08:28:34 crc kubenswrapper[4520]: I0130 08:28:34.505674 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwvz6" podStartSLOduration=2.923200343 podStartE2EDuration="5.505654296s" podCreationTimestamp="2026-01-30 08:28:29 +0000 UTC" firstStartedPulling="2026-01-30 08:28:31.45527152 +0000 UTC m=+6225.083623701" lastFinishedPulling="2026-01-30 08:28:34.037725473 +0000 UTC m=+6227.666077654" observedRunningTime="2026-01-30 08:28:34.503197406 +0000 UTC m=+6228.131549587" watchObservedRunningTime="2026-01-30 08:28:34.505654296 +0000 UTC m=+6228.134006476" Jan 30 08:28:36 crc kubenswrapper[4520]: I0130 08:28:36.691369 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104" Jan 30 08:28:36 crc kubenswrapper[4520]: E0130 08:28:36.691882 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:28:40 crc kubenswrapper[4520]: I0130 08:28:40.098702 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:40 crc kubenswrapper[4520]: I0130 08:28:40.100181 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:40 crc kubenswrapper[4520]: I0130 08:28:40.142040 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:40 crc kubenswrapper[4520]: I0130 08:28:40.587980 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:40 crc kubenswrapper[4520]: I0130 08:28:40.635760 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwvz6"] Jan 30 08:28:42 crc kubenswrapper[4520]: I0130 08:28:42.569491 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwvz6" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerName="registry-server" containerID="cri-o://1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210" gracePeriod=2 Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.016124 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwvz6" Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.118137 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpn4j\" (UniqueName: \"kubernetes.io/projected/c5824fe4-0315-467e-901a-85063c1fcdb9-kube-api-access-hpn4j\") pod \"c5824fe4-0315-467e-901a-85063c1fcdb9\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.118356 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-utilities\") pod \"c5824fe4-0315-467e-901a-85063c1fcdb9\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.118424 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-catalog-content\") pod \"c5824fe4-0315-467e-901a-85063c1fcdb9\" (UID: \"c5824fe4-0315-467e-901a-85063c1fcdb9\") " Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.119318 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-utilities" (OuterVolumeSpecName: "utilities") pod "c5824fe4-0315-467e-901a-85063c1fcdb9" (UID: "c5824fe4-0315-467e-901a-85063c1fcdb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.129079 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5824fe4-0315-467e-901a-85063c1fcdb9-kube-api-access-hpn4j" (OuterVolumeSpecName: "kube-api-access-hpn4j") pod "c5824fe4-0315-467e-901a-85063c1fcdb9" (UID: "c5824fe4-0315-467e-901a-85063c1fcdb9"). InnerVolumeSpecName "kube-api-access-hpn4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.142061 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5824fe4-0315-467e-901a-85063c1fcdb9" (UID: "c5824fe4-0315-467e-901a-85063c1fcdb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.221325 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.221360 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5824fe4-0315-467e-901a-85063c1fcdb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.221376 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpn4j\" (UniqueName: \"kubernetes.io/projected/c5824fe4-0315-467e-901a-85063c1fcdb9-kube-api-access-hpn4j\") on node \"crc\" DevicePath \"\"" Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588066 4520 generic.go:334] "Generic (PLEG): container finished" podID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerID="1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210" exitCode=0 Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588119 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerDied","Data":"1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210"} Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588155 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerDied","Data":"c7f8b7eed94838d3bae07f79ba1303d7c721ad6cb09aa530ad5275fa966fc8f7"} Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588180 4520 scope.go:117] "RemoveContainer" containerID="1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210" Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588378 4520 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588066 4520 generic.go:334] "Generic (PLEG): container finished" podID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerID="1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210" exitCode=0
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588119 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerDied","Data":"1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210"}
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588155 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwvz6" event={"ID":"c5824fe4-0315-467e-901a-85063c1fcdb9","Type":"ContainerDied","Data":"c7f8b7eed94838d3bae07f79ba1303d7c721ad6cb09aa530ad5275fa966fc8f7"}
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588180 4520 scope.go:117] "RemoveContainer" containerID="1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.588378 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwvz6"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.622963 4520 scope.go:117] "RemoveContainer" containerID="a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.627397 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwvz6"]
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.641333 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwvz6"]
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.648424 4520 scope.go:117] "RemoveContainer" containerID="a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.701114 4520 scope.go:117] "RemoveContainer" containerID="1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210"
Jan 30 08:28:43 crc kubenswrapper[4520]: E0130 08:28:43.701655 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210\": container with ID starting with 1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210 not found: ID does not exist" containerID="1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.701703 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210"} err="failed to get container status \"1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210\": rpc error: code = NotFound desc = could not find container \"1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210\": container with ID starting with 1bcfa06c275003685dc2abba033fe4d2087b99ed300150414d0e312ce42e2210 not found: ID does not exist"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.701733 4520 scope.go:117] "RemoveContainer" containerID="a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3"
Jan 30 08:28:43 crc kubenswrapper[4520]: E0130 08:28:43.702143 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3\": container with ID starting with a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3 not found: ID does not exist" containerID="a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.702165 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3"} err="failed to get container status \"a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3\": rpc error: code = NotFound desc = could not find container \"a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3\": container with ID starting with a6df6ee7ec51ea6e68d9b8cb941e47259be5dadfabbd6d6e0feeea9b129ab2e3 not found: ID does not exist"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.702179 4520 scope.go:117] "RemoveContainer" containerID="a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12"
Jan 30 08:28:43 crc kubenswrapper[4520]: E0130 08:28:43.702431 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12\": container with ID starting with a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12 not found: ID does not exist" containerID="a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12"
Jan 30 08:28:43 crc kubenswrapper[4520]: I0130 08:28:43.702449 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12"} err="failed to get container status \"a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12\": rpc error: code = NotFound desc = could not find container \"a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12\": container with ID starting with a4e07a31f265e6ca1c8087874ed5ff1d07fcb25a5dcb99f8742f0be52e208e12 not found: ID does not exist"
Jan 30 08:28:44 crc kubenswrapper[4520]: I0130 08:28:44.696933 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" path="/var/lib/kubelet/pods/c5824fe4-0315-467e-901a-85063c1fcdb9/volumes"
Jan 30 08:28:51 crc kubenswrapper[4520]: I0130 08:28:51.686093 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:28:51 crc kubenswrapper[4520]: E0130 08:28:51.686832 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:29:06 crc kubenswrapper[4520]: I0130 08:29:06.692029 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:29:06 crc kubenswrapper[4520]: E0130 08:29:06.693991 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:29:20 crc kubenswrapper[4520]: I0130 08:29:20.686011 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:29:20 crc kubenswrapper[4520]: E0130 08:29:20.686858 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:29:32 crc kubenswrapper[4520]: I0130 08:29:32.686112 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:29:32 crc kubenswrapper[4520]: E0130 08:29:32.686966 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:29:45 crc kubenswrapper[4520]: I0130 08:29:45.686762 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:29:45 crc kubenswrapper[4520]: E0130 08:29:45.687710 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:29:57 crc kubenswrapper[4520]: I0130 08:29:57.685682 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104"
Jan 30 08:29:57 crc kubenswrapper[4520]: E0130 08:29:57.686646 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.154314 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"]
Jan 30 08:30:00 crc kubenswrapper[4520]: E0130 08:30:00.155299 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerName="registry-server"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.155318 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerName="registry-server"
Jan 30 08:30:00 crc kubenswrapper[4520]: E0130 08:30:00.155334 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerName="extract-utilities"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.155340 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerName="extract-utilities"
Jan 30 08:30:00 crc kubenswrapper[4520]: E0130 08:30:00.155390 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerName="extract-content"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.155397 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerName="extract-content"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.155616 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5824fe4-0315-467e-901a-85063c1fcdb9" containerName="registry-server"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.156578 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.171032 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.171080 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.181737 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"]
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.290417 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1d017b2-ee9b-43c5-ba54-98b2ef102009-config-volume\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.290544 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1d017b2-ee9b-43c5-ba54-98b2ef102009-secret-volume\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.290579 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tjz\" (UniqueName: \"kubernetes.io/projected/c1d017b2-ee9b-43c5-ba54-98b2ef102009-kube-api-access-q7tjz\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.393386 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1d017b2-ee9b-43c5-ba54-98b2ef102009-config-volume\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.393618 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1d017b2-ee9b-43c5-ba54-98b2ef102009-secret-volume\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.393664 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7tjz\" (UniqueName: \"kubernetes.io/projected/c1d017b2-ee9b-43c5-ba54-98b2ef102009-kube-api-access-q7tjz\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.394401 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1d017b2-ee9b-43c5-ba54-98b2ef102009-config-volume\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.401118 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1d017b2-ee9b-43c5-ba54-98b2ef102009-secret-volume\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.410973 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7tjz\" (UniqueName: \"kubernetes.io/projected/c1d017b2-ee9b-43c5-ba54-98b2ef102009-kube-api-access-q7tjz\") pod \"collect-profiles-29496030-6p4rn\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.476589 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"
Jan 30 08:30:00 crc kubenswrapper[4520]: I0130 08:30:00.916108 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn"]
Jan 30 08:30:01 crc kubenswrapper[4520]: I0130 08:30:01.362538 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn" event={"ID":"c1d017b2-ee9b-43c5-ba54-98b2ef102009","Type":"ContainerStarted","Data":"2ef9f8f5075483725a19c2c6fd4210d88d8b513d85a754dfbadd24b901781e48"}
Jan 30 08:30:01 crc kubenswrapper[4520]: I0130 08:30:01.362872 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn" event={"ID":"c1d017b2-ee9b-43c5-ba54-98b2ef102009","Type":"ContainerStarted","Data":"eb6c81cd8538db7ebba482372479f9afaf0ceb8b3b6d357787e72f2023bebec1"}
Jan 30 08:30:01 crc kubenswrapper[4520]: I0130 08:30:01.379273 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn" podStartSLOduration=1.3792551419999999 podStartE2EDuration="1.379255142s" podCreationTimestamp="2026-01-30 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:30:01.373915931 +0000 UTC m=+6315.002268113" watchObservedRunningTime="2026-01-30 08:30:01.379255142 +0000 UTC m=+6315.007607322"
Jan 30 08:30:02 crc kubenswrapper[4520]: I0130 08:30:02.373558 4520 generic.go:334] "Generic (PLEG): container finished" podID="c1d017b2-ee9b-43c5-ba54-98b2ef102009" containerID="2ef9f8f5075483725a19c2c6fd4210d88d8b513d85a754dfbadd24b901781e48" exitCode=0
Jan 30 08:30:02 crc kubenswrapper[4520]: I0130 08:30:02.373750 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn" event={"ID":"c1d017b2-ee9b-43c5-ba54-98b2ef102009","Type":"ContainerDied","Data":"2ef9f8f5075483725a19c2c6fd4210d88d8b513d85a754dfbadd24b901781e48"}
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn" Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.876877 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1d017b2-ee9b-43c5-ba54-98b2ef102009-secret-volume\") pod \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.877024 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1d017b2-ee9b-43c5-ba54-98b2ef102009-config-volume\") pod \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.877288 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7tjz\" (UniqueName: \"kubernetes.io/projected/c1d017b2-ee9b-43c5-ba54-98b2ef102009-kube-api-access-q7tjz\") pod \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\" (UID: \"c1d017b2-ee9b-43c5-ba54-98b2ef102009\") " Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.878050 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d017b2-ee9b-43c5-ba54-98b2ef102009-config-volume" (OuterVolumeSpecName: "config-volume") pod "c1d017b2-ee9b-43c5-ba54-98b2ef102009" (UID: "c1d017b2-ee9b-43c5-ba54-98b2ef102009"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.885894 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1d017b2-ee9b-43c5-ba54-98b2ef102009-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c1d017b2-ee9b-43c5-ba54-98b2ef102009" (UID: "c1d017b2-ee9b-43c5-ba54-98b2ef102009"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.887209 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d017b2-ee9b-43c5-ba54-98b2ef102009-kube-api-access-q7tjz" (OuterVolumeSpecName: "kube-api-access-q7tjz") pod "c1d017b2-ee9b-43c5-ba54-98b2ef102009" (UID: "c1d017b2-ee9b-43c5-ba54-98b2ef102009"). InnerVolumeSpecName "kube-api-access-q7tjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.981184 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c1d017b2-ee9b-43c5-ba54-98b2ef102009-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.981545 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1d017b2-ee9b-43c5-ba54-98b2ef102009-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:03 crc kubenswrapper[4520]: I0130 08:30:03.981617 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7tjz\" (UniqueName: \"kubernetes.io/projected/c1d017b2-ee9b-43c5-ba54-98b2ef102009-kube-api-access-q7tjz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:04 crc kubenswrapper[4520]: I0130 08:30:04.396113 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn" event={"ID":"c1d017b2-ee9b-43c5-ba54-98b2ef102009","Type":"ContainerDied","Data":"eb6c81cd8538db7ebba482372479f9afaf0ceb8b3b6d357787e72f2023bebec1"} Jan 30 08:30:04 crc kubenswrapper[4520]: I0130 08:30:04.396215 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-6p4rn" Jan 30 08:30:04 crc kubenswrapper[4520]: I0130 08:30:04.396167 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6c81cd8538db7ebba482372479f9afaf0ceb8b3b6d357787e72f2023bebec1" Jan 30 08:30:04 crc kubenswrapper[4520]: I0130 08:30:04.455243 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq"] Jan 30 08:30:04 crc kubenswrapper[4520]: I0130 08:30:04.461576 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495985-9dwtq"] Jan 30 08:30:04 crc kubenswrapper[4520]: I0130 08:30:04.697823 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b35994b-b093-47d9-870f-207a997a2017" path="/var/lib/kubelet/pods/4b35994b-b093-47d9-870f-207a997a2017/volumes" Jan 30 08:30:11 crc kubenswrapper[4520]: I0130 08:30:11.686950 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104" Jan 30 08:30:12 crc kubenswrapper[4520]: I0130 08:30:12.474417 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"2fc9cf35ae77138b80221759391497b0489b40c426e1fe4f09e0a678cfc05df8"} Jan 30 08:30:58 crc kubenswrapper[4520]: I0130 08:30:58.296079 4520 scope.go:117] "RemoveContainer" containerID="89f95c983402e2c9180cb55f0e06d08bb623337186328d11eff57457328c9284" Jan 30 08:32:27 crc kubenswrapper[4520]: I0130 08:32:27.793358 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:32:27 crc kubenswrapper[4520]: I0130 08:32:27.793913 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:32:31 crc kubenswrapper[4520]: I0130 08:32:31.517152 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7fc4c495dc-4wmrl" podUID="17fafdee-9ab2-479b-85e0-96e3ef98daa8" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 30 08:32:57 crc kubenswrapper[4520]: I0130 08:32:57.793770 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:32:57 crc kubenswrapper[4520]: I0130 08:32:57.794378 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:33:27 crc kubenswrapper[4520]: I0130 08:33:27.793862 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:33:27 crc kubenswrapper[4520]: I0130 08:33:27.794258 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:33:27 crc kubenswrapper[4520]: I0130 08:33:27.794302 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 08:33:27 crc kubenswrapper[4520]: I0130 08:33:27.794805 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fc9cf35ae77138b80221759391497b0489b40c426e1fe4f09e0a678cfc05df8"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:33:27 crc kubenswrapper[4520]: I0130 08:33:27.794849 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://2fc9cf35ae77138b80221759391497b0489b40c426e1fe4f09e0a678cfc05df8" gracePeriod=600 Jan 30 08:33:28 crc kubenswrapper[4520]: I0130 08:33:28.307140 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="2fc9cf35ae77138b80221759391497b0489b40c426e1fe4f09e0a678cfc05df8" exitCode=0 Jan 30 08:33:28 crc kubenswrapper[4520]: I0130 08:33:28.307193 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" 
event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"2fc9cf35ae77138b80221759391497b0489b40c426e1fe4f09e0a678cfc05df8"} Jan 30 08:33:28 crc kubenswrapper[4520]: I0130 08:33:28.307561 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863"} Jan 30 08:33:28 crc kubenswrapper[4520]: I0130 08:33:28.307588 4520 scope.go:117] "RemoveContainer" containerID="101a296d02a9bb848d5434571ec27275f07f863919cf3402452fde90fa8f2104" Jan 30 08:35:57 crc kubenswrapper[4520]: I0130 08:35:57.793318 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:35:57 crc kubenswrapper[4520]: I0130 08:35:57.794642 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:36:27 crc kubenswrapper[4520]: I0130 08:36:27.793687 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:36:27 crc kubenswrapper[4520]: I0130 08:36:27.794334 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.793171 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.793821 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.793862 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.794356 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 
Jan 30 08:35:57 crc kubenswrapper[4520]: I0130 08:35:57.793318 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:35:57 crc kubenswrapper[4520]: I0130 08:35:57.794642 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:36:27 crc kubenswrapper[4520]: I0130 08:36:27.793687 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:36:27 crc kubenswrapper[4520]: I0130 08:36:27.794334 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.793171 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.793821 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.793862 4520 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt"
Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.794356 4520 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863"} pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 08:36:57 crc kubenswrapper[4520]: I0130 08:36:57.794410 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" containerID="cri-o://6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" gracePeriod=600
Jan 30 08:36:57 crc kubenswrapper[4520]: E0130 08:36:57.924896 4520 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5f51275_c0b1_4467_bf4a_ef848e3521df.slice/crio-6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 08:36:57 crc kubenswrapper[4520]: E0130 08:36:57.926842 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:36:58 crc kubenswrapper[4520]: I0130 08:36:58.112421 4520 generic.go:334] "Generic (PLEG): container finished" podID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" exitCode=0
Jan 30 08:36:58 crc kubenswrapper[4520]: I0130 08:36:58.112504 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerDied","Data":"6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863"}
Jan 30 08:36:58 crc kubenswrapper[4520]: I0130 08:36:58.112822 4520 scope.go:117] "RemoveContainer" containerID="2fc9cf35ae77138b80221759391497b0489b40c426e1fe4f09e0a678cfc05df8"
Jan 30 08:36:58 crc kubenswrapper[4520]: I0130 08:36:58.114835 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863"
Jan 30 08:36:58 crc kubenswrapper[4520]: E0130 08:36:58.115464 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
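The CrashLoopBackOff errors above show kubelet refusing to restart the container immediately: each failed restart doubles a delay up to a cap, and for this repeatedly failing daemon the delay has already reached the 5m0s ceiling quoted in the error. A sketch of that schedule (the 10-second initial delay and the doubling are assumed from upstream kubelet defaults; the 5-minute cap is the figure visible in the log):

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the restart delay after n consecutive crashes,
// doubling from an initial delay up to a fixed cap, in the style of
// kubelet's CrashLoopBackOff.
func backoff(n int) time.Duration {
	const (
		initial  = 10 * time.Second // assumed kubelet default
		maxDelay = 5 * time.Minute  // "back-off 5m0s" in the log above
	)
	d := initial
	for i := 0; i < n; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("crash %d: wait %v\n", n, backoff(n)) // 10s, 20s, 40s, ..., 5m0s
	}
}
```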
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.548982 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fgwqk"]
Jan 30 08:36:59 crc kubenswrapper[4520]: E0130 08:36:59.553213 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d017b2-ee9b-43c5-ba54-98b2ef102009" containerName="collect-profiles"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.553250 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d017b2-ee9b-43c5-ba54-98b2ef102009" containerName="collect-profiles"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.554963 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d017b2-ee9b-43c5-ba54-98b2ef102009" containerName="collect-profiles"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.557050 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.616886 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fgwqk"]
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.749709 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-utilities\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.749853 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4928r\" (UniqueName: \"kubernetes.io/projected/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-kube-api-access-4928r\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.749933 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-catalog-content\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.851699 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-utilities\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.851744 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4928r\" (UniqueName: \"kubernetes.io/projected/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-kube-api-access-4928r\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.851770 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-catalog-content\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.853086 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-catalog-content\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.853128 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-utilities\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:36:59 crc kubenswrapper[4520]: I0130 08:36:59.887657 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4928r\" (UniqueName: \"kubernetes.io/projected/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-kube-api-access-4928r\") pod \"community-operators-fgwqk\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:37:00 crc kubenswrapper[4520]: I0130 08:37:00.176191 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.019612 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fgwqk"]
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.145192 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgwqk" event={"ID":"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05","Type":"ContainerStarted","Data":"d2c7bbb42f6f91f3fe71625a6a4f595ac6544977eeda4d3ef45815b06a3ae8ca"}
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.739278 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-74kdf"]
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.745964 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.752386 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74kdf"]
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.808928 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-catalog-content\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.808974 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-utilities\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.809329 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nctp4\" (UniqueName: \"kubernetes.io/projected/60c5c847-2b66-4899-80ac-edd79a131074-kube-api-access-nctp4\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.910628 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nctp4\" (UniqueName: \"kubernetes.io/projected/60c5c847-2b66-4899-80ac-edd79a131074-kube-api-access-nctp4\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.910697 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-catalog-content\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.910718 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-utilities\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.911675 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-utilities\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.911737 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-catalog-content\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:01 crc kubenswrapper[4520]: I0130 08:37:01.931251 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nctp4\" (UniqueName: \"kubernetes.io/projected/60c5c847-2b66-4899-80ac-edd79a131074-kube-api-access-nctp4\") pod \"redhat-operators-74kdf\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.061721 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.158896 4520 generic.go:334] "Generic (PLEG): container finished" podID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerID="9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e" exitCode=0
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.158940 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgwqk" event={"ID":"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05","Type":"ContainerDied","Data":"9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e"}
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.173667 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.594624 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74kdf"]
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.738876 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f4gf4"]
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.741257 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.751904 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4gf4"]
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.828947 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-utilities\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.829031 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-catalog-content\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.829073 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxd2\" (UniqueName: \"kubernetes.io/projected/73ed3152-8076-44b1-9fc3-55aabe5c592c-kube-api-access-jxxd2\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.931548 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-utilities\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.931686 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-catalog-content\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.931759 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxd2\" (UniqueName: \"kubernetes.io/projected/73ed3152-8076-44b1-9fc3-55aabe5c592c-kube-api-access-jxxd2\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.932664 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-utilities\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.933025 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-catalog-content\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:02 crc kubenswrapper[4520]: I0130 08:37:02.973143 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxxd2\" (UniqueName: \"kubernetes.io/projected/73ed3152-8076-44b1-9fc3-55aabe5c592c-kube-api-access-jxxd2\") pod \"certified-operators-f4gf4\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:03 crc kubenswrapper[4520]: I0130 08:37:03.074836 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:03 crc kubenswrapper[4520]: I0130 08:37:03.171641 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgwqk" event={"ID":"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05","Type":"ContainerStarted","Data":"b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e"}
Jan 30 08:37:03 crc kubenswrapper[4520]: I0130 08:37:03.197187 4520 generic.go:334] "Generic (PLEG): container finished" podID="60c5c847-2b66-4899-80ac-edd79a131074" containerID="e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f" exitCode=0
Jan 30 08:37:03 crc kubenswrapper[4520]: I0130 08:37:03.197228 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74kdf" event={"ID":"60c5c847-2b66-4899-80ac-edd79a131074","Type":"ContainerDied","Data":"e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f"}
Jan 30 08:37:03 crc kubenswrapper[4520]: I0130 08:37:03.197249 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74kdf" event={"ID":"60c5c847-2b66-4899-80ac-edd79a131074","Type":"ContainerStarted","Data":"5c5c2b78b4a62529200de4c794f615d412346c09da9cebde01d0757b11eee3bf"}
Jan 30 08:37:03 crc kubenswrapper[4520]: I0130 08:37:03.588096 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4gf4"]
Jan 30 08:37:04 crc kubenswrapper[4520]: I0130 08:37:04.209963 4520 generic.go:334] "Generic (PLEG): container finished" podID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerID="392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2" exitCode=0
Jan 30 08:37:04 crc kubenswrapper[4520]: I0130 08:37:04.210351 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gf4" event={"ID":"73ed3152-8076-44b1-9fc3-55aabe5c592c","Type":"ContainerDied","Data":"392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2"}
Jan 30 08:37:04 crc kubenswrapper[4520]: I0130 08:37:04.210381 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gf4" event={"ID":"73ed3152-8076-44b1-9fc3-55aabe5c592c","Type":"ContainerStarted","Data":"7476eba7fb134fbce5d5f6804d66a521645df0cad4f0e759f1966e9991a8e061"}
Jan 30 08:37:04 crc kubenswrapper[4520]: I0130 08:37:04.217124 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74kdf" event={"ID":"60c5c847-2b66-4899-80ac-edd79a131074","Type":"ContainerStarted","Data":"1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d"}
Jan 30 08:37:05 crc kubenswrapper[4520]: I0130 08:37:05.224804 4520 generic.go:334] "Generic (PLEG): container finished" podID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerID="b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e" exitCode=0
Jan 30 08:37:05 crc kubenswrapper[4520]: I0130 08:37:05.224887 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgwqk" event={"ID":"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05","Type":"ContainerDied","Data":"b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e"}
Jan 30 08:37:05 crc kubenswrapper[4520]: I0130 08:37:05.228688 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gf4" event={"ID":"73ed3152-8076-44b1-9fc3-55aabe5c592c","Type":"ContainerStarted","Data":"d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057"}
Jan 30 08:37:06 crc kubenswrapper[4520]: I0130 08:37:06.289338 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgwqk" event={"ID":"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05","Type":"ContainerStarted","Data":"4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce"}
Jan 30 08:37:06 crc kubenswrapper[4520]: I0130 08:37:06.344378 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fgwqk" podStartSLOduration=3.764422711 podStartE2EDuration="7.341999413s" podCreationTimestamp="2026-01-30 08:36:59 +0000 UTC" firstStartedPulling="2026-01-30 08:37:02.163831723 +0000 UTC m=+6735.792183895" lastFinishedPulling="2026-01-30 08:37:05.741408415 +0000 UTC m=+6739.369760597" observedRunningTime="2026-01-30 08:37:06.3338022 +0000 UTC m=+6739.962154381" watchObservedRunningTime="2026-01-30 08:37:06.341999413 +0000 UTC m=+6739.970351593"
Jan 30 08:37:07 crc kubenswrapper[4520]: I0130 08:37:07.304439 4520 generic.go:334] "Generic (PLEG): container finished" podID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerID="d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057" exitCode=0
Jan 30 08:37:07 crc kubenswrapper[4520]: I0130 08:37:07.304566 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gf4" event={"ID":"73ed3152-8076-44b1-9fc3-55aabe5c592c","Type":"ContainerDied","Data":"d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057"}
Jan 30 08:37:08 crc kubenswrapper[4520]: I0130 08:37:08.318264 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gf4" event={"ID":"73ed3152-8076-44b1-9fc3-55aabe5c592c","Type":"ContainerStarted","Data":"28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f"}
Jan 30 08:37:08 crc kubenswrapper[4520]: I0130 08:37:08.321389 4520 generic.go:334] "Generic (PLEG): container finished" podID="60c5c847-2b66-4899-80ac-edd79a131074" containerID="1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d" exitCode=0
Jan 30 08:37:08 crc kubenswrapper[4520]: I0130 08:37:08.321462 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74kdf" event={"ID":"60c5c847-2b66-4899-80ac-edd79a131074","Type":"ContainerDied","Data":"1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d"}
Jan 30 08:37:08 crc kubenswrapper[4520]: I0130 08:37:08.347082 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f4gf4" podStartSLOduration=2.669289736 podStartE2EDuration="6.347060903s" podCreationTimestamp="2026-01-30 08:37:02 +0000 UTC" firstStartedPulling="2026-01-30 08:37:04.211956622 +0000 UTC m=+6737.840308794" lastFinishedPulling="2026-01-30 08:37:07.88972778 +0000 UTC m=+6741.518079961" observedRunningTime="2026-01-30 08:37:08.336032579 +0000 UTC m=+6741.964384759" watchObservedRunningTime="2026-01-30 08:37:08.347060903 +0000 UTC m=+6741.975413075"
Jan 30 08:37:09 crc kubenswrapper[4520]: I0130 08:37:09.336220 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74kdf" event={"ID":"60c5c847-2b66-4899-80ac-edd79a131074","Type":"ContainerStarted","Data":"33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c"}
Jan 30 08:37:09 crc kubenswrapper[4520]: I0130 08:37:09.356580 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-74kdf" podStartSLOduration=2.6120428799999997 podStartE2EDuration="8.356557361s" podCreationTimestamp="2026-01-30 08:37:01 +0000 UTC" firstStartedPulling="2026-01-30 08:37:03.203677233 +0000 UTC m=+6736.832029415" lastFinishedPulling="2026-01-30 08:37:08.948191716 +0000 UTC m=+6742.576543896" observedRunningTime="2026-01-30 08:37:09.352228812 +0000 UTC m=+6742.980580993" watchObservedRunningTime="2026-01-30 08:37:09.356557361 +0000 UTC m=+6742.984909541"
Jan 30 08:37:10 crc kubenswrapper[4520]: I0130 08:37:10.177006 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:37:10 crc kubenswrapper[4520]: I0130 08:37:10.177430 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:37:10 crc kubenswrapper[4520]: I0130 08:37:10.686916 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863"
Jan 30 08:37:10 crc kubenswrapper[4520]: E0130 08:37:10.687815 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:37:11 crc kubenswrapper[4520]: I0130 08:37:11.253605 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fgwqk" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:37:11 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:37:11 crc kubenswrapper[4520]: >
Jan 30 08:37:12 crc kubenswrapper[4520]: I0130 08:37:12.062390 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:12 crc kubenswrapper[4520]: I0130 08:37:12.062859 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-74kdf"
Jan 30 08:37:13 crc kubenswrapper[4520]: I0130 08:37:13.075491 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:13 crc kubenswrapper[4520]: I0130 08:37:13.075814 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:13 crc kubenswrapper[4520]: I0130 08:37:13.098142 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-74kdf" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:37:13 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:37:13 crc kubenswrapper[4520]: >
Jan 30 08:37:14 crc kubenswrapper[4520]: I0130 08:37:14.120798 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f4gf4" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:37:14 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:37:14 crc kubenswrapper[4520]: >
Jan 30 08:37:20 crc kubenswrapper[4520]: I0130 08:37:20.236679 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:37:20 crc kubenswrapper[4520]: I0130 08:37:20.279326 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fgwqk"
Jan 30 08:37:22 crc kubenswrapper[4520]: I0130 08:37:22.686073 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863"
Jan 30 08:37:22 crc kubenswrapper[4520]: E0130 08:37:22.686726 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df"
Jan 30 08:37:23 crc kubenswrapper[4520]: I0130 08:37:23.106383 4520 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-74kdf" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:37:23 crc kubenswrapper[4520]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:37:23 crc kubenswrapper[4520]: >
Jan 30 08:37:23 crc kubenswrapper[4520]: I0130 08:37:23.117637 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f4gf4"
Jan 30 08:37:23 crc kubenswrapper[4520]: I0130 08:37:23.158246 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f4gf4"
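The registry-server in each marketplace pod serves its catalog over gRPC on port 50051, and the probe output above — the "timeout: failed to connect service ..." message — is the probe reporting that the port is not yet accepting connections while the catalog content is still being extracted; once the server comes up, the startup probes flip to "started" and the readiness probes to "ready". The real check is a gRPC health probe; a simplified TCP-level analogue with the same one-second budget:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// checkPort mimics only the connectivity half of the startup probe:
// can anything be reached on the registry-server port within 1s?
// (The actual probe goes further and issues a gRPC health-check RPC.)
func checkPort(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. connection refused or i/o timeout
	}
	return conn.Close()
}

func main() {
	if err := checkPort("127.0.0.1:50051", time.Second); err != nil {
		fmt.Println("startup probe would fail:", err)
		return
	}
	fmt.Println("port accepting connections; probe can proceed")
}
```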
Need to start a new one" pod="openshift-marketplace/community-operators-fgwqk" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.358850 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-utilities\") pod \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.358945 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-catalog-content\") pod \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.359205 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4928r\" (UniqueName: \"kubernetes.io/projected/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-kube-api-access-4928r\") pod \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\" (UID: \"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05\") " Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.361626 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-utilities" (OuterVolumeSpecName: "utilities") pod "4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" (UID: "4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.377379 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-kube-api-access-4928r" (OuterVolumeSpecName: "kube-api-access-4928r") pod "4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" (UID: "4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05"). InnerVolumeSpecName "kube-api-access-4928r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.412476 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" (UID: "4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.462873 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4928r\" (UniqueName: \"kubernetes.io/projected/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-kube-api-access-4928r\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.462911 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.462925 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.504577 4520 generic.go:334] "Generic (PLEG): container finished" podID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerID="4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce" exitCode=0 Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.504643 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgwqk" event={"ID":"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05","Type":"ContainerDied","Data":"4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce"} Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.504657 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgwqk" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.504692 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgwqk" event={"ID":"4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05","Type":"ContainerDied","Data":"d2c7bbb42f6f91f3fe71625a6a4f595ac6544977eeda4d3ef45815b06a3ae8ca"} Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.504714 4520 scope.go:117] "RemoveContainer" containerID="4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.542790 4520 scope.go:117] "RemoveContainer" containerID="b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.560590 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fgwqk"] Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.574593 4520 scope.go:117] "RemoveContainer" containerID="9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.574838 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fgwqk"] Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.617351 4520 scope.go:117] "RemoveContainer" containerID="4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce" Jan 30 08:37:24 crc kubenswrapper[4520]: E0130 08:37:24.618722 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce\": container with ID starting with 4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce not found: ID does not exist" containerID="4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.619271 
4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce"} err="failed to get container status \"4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce\": rpc error: code = NotFound desc = could not find container \"4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce\": container with ID starting with 4f12699d151276d9c145aa1537f23240c35a699632e1afca98924b5635b9b2ce not found: ID does not exist" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.619305 4520 scope.go:117] "RemoveContainer" containerID="b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e" Jan 30 08:37:24 crc kubenswrapper[4520]: E0130 08:37:24.619609 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e\": container with ID starting with b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e not found: ID does not exist" containerID="b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.619633 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e"} err="failed to get container status \"b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e\": rpc error: code = NotFound desc = could not find container \"b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e\": container with ID starting with b8c8bc710db529d4ba48c66eb5f73a1160787310d39751025575d550c9b5589e not found: ID does not exist" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.619673 4520 scope.go:117] "RemoveContainer" containerID="9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e" Jan 30 08:37:24 crc kubenswrapper[4520]: E0130 08:37:24.619900 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e\": container with ID starting with 9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e not found: ID does not exist" containerID="9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.619926 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e"} err="failed to get container status \"9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e\": rpc error: code = NotFound desc = could not find container \"9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e\": container with ID starting with 9269bca29e35e1941aa62c4423e827ffc387fc85ec0a8ea7d4df531ff0ab775e not found: ID does not exist" Jan 30 08:37:24 crc kubenswrapper[4520]: I0130 08:37:24.695825 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" path="/var/lib/kubelet/pods/4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05/volumes" Jan 30 08:37:25 crc kubenswrapper[4520]: I0130 08:37:25.731006 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4gf4"] Jan 30 08:37:25 crc kubenswrapper[4520]: I0130 08:37:25.731239 4520 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/certified-operators-f4gf4" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="registry-server" containerID="cri-o://28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f" gracePeriod=2 Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.149982 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4gf4" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.197147 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-catalog-content\") pod \"73ed3152-8076-44b1-9fc3-55aabe5c592c\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.197310 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxxd2\" (UniqueName: \"kubernetes.io/projected/73ed3152-8076-44b1-9fc3-55aabe5c592c-kube-api-access-jxxd2\") pod \"73ed3152-8076-44b1-9fc3-55aabe5c592c\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.197604 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-utilities\") pod \"73ed3152-8076-44b1-9fc3-55aabe5c592c\" (UID: \"73ed3152-8076-44b1-9fc3-55aabe5c592c\") " Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.198390 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-utilities" (OuterVolumeSpecName: "utilities") pod "73ed3152-8076-44b1-9fc3-55aabe5c592c" (UID: "73ed3152-8076-44b1-9fc3-55aabe5c592c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.204665 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ed3152-8076-44b1-9fc3-55aabe5c592c-kube-api-access-jxxd2" (OuterVolumeSpecName: "kube-api-access-jxxd2") pod "73ed3152-8076-44b1-9fc3-55aabe5c592c" (UID: "73ed3152-8076-44b1-9fc3-55aabe5c592c"). InnerVolumeSpecName "kube-api-access-jxxd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.253861 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73ed3152-8076-44b1-9fc3-55aabe5c592c" (UID: "73ed3152-8076-44b1-9fc3-55aabe5c592c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.301248 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.301274 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73ed3152-8076-44b1-9fc3-55aabe5c592c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.301306 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxxd2\" (UniqueName: \"kubernetes.io/projected/73ed3152-8076-44b1-9fc3-55aabe5c592c-kube-api-access-jxxd2\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.523310 4520 generic.go:334] "Generic (PLEG): container finished" podID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerID="28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f" exitCode=0 Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.523372 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4gf4" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.523375 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gf4" event={"ID":"73ed3152-8076-44b1-9fc3-55aabe5c592c","Type":"ContainerDied","Data":"28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f"} Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.523493 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4gf4" event={"ID":"73ed3152-8076-44b1-9fc3-55aabe5c592c","Type":"ContainerDied","Data":"7476eba7fb134fbce5d5f6804d66a521645df0cad4f0e759f1966e9991a8e061"} Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.523529 4520 scope.go:117] "RemoveContainer" containerID="28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.550680 4520 scope.go:117] "RemoveContainer" containerID="d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.568576 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4gf4"] Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.580597 4520 scope.go:117] "RemoveContainer" containerID="392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.587527 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f4gf4"] Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.611375 4520 scope.go:117] "RemoveContainer" containerID="28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f" Jan 30 08:37:26 crc kubenswrapper[4520]: E0130 08:37:26.611893 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f\": container with ID starting with 28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f not found: ID does not exist" containerID="28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.611942 
4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f"} err="failed to get container status \"28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f\": rpc error: code = NotFound desc = could not find container \"28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f\": container with ID starting with 28a2f43997dc34b93cff1cfff8ee67670f9a23f3f78de4896f1ff5ce5d784a2f not found: ID does not exist" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.611969 4520 scope.go:117] "RemoveContainer" containerID="d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057" Jan 30 08:37:26 crc kubenswrapper[4520]: E0130 08:37:26.612425 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057\": container with ID starting with d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057 not found: ID does not exist" containerID="d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.612467 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057"} err="failed to get container status \"d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057\": rpc error: code = NotFound desc = could not find container \"d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057\": container with ID starting with d02feb2fb3fe5633452694caea9e8aefb3a1c128340d168fbac3616d2e7d5057 not found: ID does not exist" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.612490 4520 scope.go:117] "RemoveContainer" containerID="392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2" Jan 30 08:37:26 crc kubenswrapper[4520]: E0130 08:37:26.612895 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2\": container with ID starting with 392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2 not found: ID does not exist" containerID="392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.612914 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2"} err="failed to get container status \"392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2\": rpc error: code = NotFound desc = could not find container \"392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2\": container with ID starting with 392a475e45cdfebada76ed8ee588250689b468b73819d7f4444da7f6393d68a2 not found: ID does not exist" Jan 30 08:37:26 crc kubenswrapper[4520]: I0130 08:37:26.720599 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" path="/var/lib/kubelet/pods/73ed3152-8076-44b1-9fc3-55aabe5c592c/volumes" Jan 30 08:37:32 crc kubenswrapper[4520]: I0130 08:37:32.098735 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-74kdf" Jan 30 08:37:32 crc kubenswrapper[4520]: I0130 08:37:32.137682 4520 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-74kdf" Jan 30 08:37:33 crc kubenswrapper[4520]: I0130 08:37:33.144368 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74kdf"] Jan 30 08:37:33 crc kubenswrapper[4520]: I0130 08:37:33.580746 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-74kdf" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="registry-server" containerID="cri-o://33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c" gracePeriod=2 Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.013467 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74kdf" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.070951 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-catalog-content\") pod \"60c5c847-2b66-4899-80ac-edd79a131074\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.071010 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-utilities\") pod \"60c5c847-2b66-4899-80ac-edd79a131074\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.071327 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nctp4\" (UniqueName: \"kubernetes.io/projected/60c5c847-2b66-4899-80ac-edd79a131074-kube-api-access-nctp4\") pod \"60c5c847-2b66-4899-80ac-edd79a131074\" (UID: \"60c5c847-2b66-4899-80ac-edd79a131074\") " Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.072139 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-utilities" (OuterVolumeSpecName: "utilities") pod "60c5c847-2b66-4899-80ac-edd79a131074" (UID: "60c5c847-2b66-4899-80ac-edd79a131074"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.077994 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c5c847-2b66-4899-80ac-edd79a131074-kube-api-access-nctp4" (OuterVolumeSpecName: "kube-api-access-nctp4") pod "60c5c847-2b66-4899-80ac-edd79a131074" (UID: "60c5c847-2b66-4899-80ac-edd79a131074"). InnerVolumeSpecName "kube-api-access-nctp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.175634 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.175675 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nctp4\" (UniqueName: \"kubernetes.io/projected/60c5c847-2b66-4899-80ac-edd79a131074-kube-api-access-nctp4\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.180995 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60c5c847-2b66-4899-80ac-edd79a131074" (UID: "60c5c847-2b66-4899-80ac-edd79a131074"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.277838 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c5c847-2b66-4899-80ac-edd79a131074-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.591678 4520 generic.go:334] "Generic (PLEG): container finished" podID="60c5c847-2b66-4899-80ac-edd79a131074" containerID="33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c" exitCode=0 Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.591732 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74kdf" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.591751 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74kdf" event={"ID":"60c5c847-2b66-4899-80ac-edd79a131074","Type":"ContainerDied","Data":"33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c"} Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.592187 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74kdf" event={"ID":"60c5c847-2b66-4899-80ac-edd79a131074","Type":"ContainerDied","Data":"5c5c2b78b4a62529200de4c794f615d412346c09da9cebde01d0757b11eee3bf"} Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.592218 4520 scope.go:117] "RemoveContainer" containerID="33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.640266 4520 scope.go:117] "RemoveContainer" containerID="1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.684085 4520 scope.go:117] "RemoveContainer" containerID="e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.714941 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74kdf"] Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.755023 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-74kdf"] Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.784294 4520 scope.go:117] "RemoveContainer" containerID="33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c" Jan 30 08:37:34 crc kubenswrapper[4520]: E0130 08:37:34.791665 4520 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c\": container with ID starting with 33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c not found: ID does not exist" containerID="33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.791723 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c"} err="failed to get container status \"33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c\": rpc error: code = NotFound desc = could not find container \"33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c\": container with ID starting with 33940c72d3d79dd8b1098927651740a27a4662c033ab981ca431ea83b5af0a8c not found: ID does not exist" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.791758 4520 scope.go:117] "RemoveContainer" containerID="1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d" Jan 30 08:37:34 crc kubenswrapper[4520]: E0130 08:37:34.794200 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d\": container with ID starting with 1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d not found: ID does not exist" containerID="1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.794232 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d"} err="failed to get container status \"1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d\": rpc error: code = NotFound desc = could not find container \"1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d\": container with ID starting with 1070ddfa849376180f692347b97007260c39e8ff78a1ee1a052fc4e50807249d not found: ID does not exist" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.794255 4520 scope.go:117] "RemoveContainer" containerID="e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f" Jan 30 08:37:34 crc kubenswrapper[4520]: E0130 08:37:34.798101 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f\": container with ID starting with e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f not found: ID does not exist" containerID="e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f" Jan 30 08:37:34 crc kubenswrapper[4520]: I0130 08:37:34.798141 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f"} err="failed to get container status \"e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f\": rpc error: code = NotFound desc = could not find container \"e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f\": container with ID starting with e1d2cc051b39e1d96208c127bb84b9be43a2a251d116daa86557a5a5bb20538f not found: ID does not exist" Jan 30 08:37:36 crc kubenswrapper[4520]: I0130 08:37:36.689931 4520 scope.go:117] "RemoveContainer" 
containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:37:36 crc kubenswrapper[4520]: E0130 08:37:36.691191 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:37:36 crc kubenswrapper[4520]: I0130 08:37:36.693075 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c5c847-2b66-4899-80ac-edd79a131074" path="/var/lib/kubelet/pods/60c5c847-2b66-4899-80ac-edd79a131074/volumes" Jan 30 08:37:47 crc kubenswrapper[4520]: I0130 08:37:47.688552 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:37:47 crc kubenswrapper[4520]: E0130 08:37:47.689228 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:38:02 crc kubenswrapper[4520]: I0130 08:38:02.686492 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:38:02 crc kubenswrapper[4520]: E0130 08:38:02.687327 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:38:17 crc kubenswrapper[4520]: I0130 08:38:17.685894 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:38:17 crc kubenswrapper[4520]: E0130 08:38:17.686827 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:38:30 crc kubenswrapper[4520]: I0130 08:38:30.686646 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:38:30 crc kubenswrapper[4520]: E0130 08:38:30.687798 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:38:42 crc kubenswrapper[4520]: 
I0130 08:38:42.685469 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:38:42 crc kubenswrapper[4520]: E0130 08:38:42.686178 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:38:55 crc kubenswrapper[4520]: I0130 08:38:55.686251 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:38:55 crc kubenswrapper[4520]: E0130 08:38:55.687225 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:39:09 crc kubenswrapper[4520]: I0130 08:39:09.686224 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:39:09 crc kubenswrapper[4520]: E0130 08:39:09.687026 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:39:21 crc kubenswrapper[4520]: I0130 08:39:21.685809 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:39:21 crc kubenswrapper[4520]: E0130 08:39:21.686620 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:39:34 crc kubenswrapper[4520]: I0130 08:39:34.686855 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:39:34 crc kubenswrapper[4520]: E0130 08:39:34.688921 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.492164 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-676r8"] Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494580 
4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="extract-utilities" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494621 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="extract-utilities" Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494655 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="extract-content" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494664 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="extract-content" Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494671 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494679 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494689 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494696 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494714 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerName="extract-utilities" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494723 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerName="extract-utilities" Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494752 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="extract-content" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494760 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="extract-content" Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494774 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerName="extract-content" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494783 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerName="extract-content" Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494795 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494801 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: E0130 08:39:44.494813 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="extract-utilities" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.494819 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="extract-utilities" Jan 30 08:39:44 crc 
kubenswrapper[4520]: I0130 08:39:44.495065 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c731dbc-4f0a-4a8b-b3b8-b8747b1e9d05" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.495082 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ed3152-8076-44b1-9fc3-55aabe5c592c" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.495095 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c5c847-2b66-4899-80ac-edd79a131074" containerName="registry-server" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.499350 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.502354 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-676r8"] Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.554583 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-catalog-content\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.554744 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p68f\" (UniqueName: \"kubernetes.io/projected/899eab19-093a-4328-8eac-20e8e218f094-kube-api-access-8p68f\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.554871 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-utilities\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.657001 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-utilities\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.657144 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-catalog-content\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.657188 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p68f\" (UniqueName: \"kubernetes.io/projected/899eab19-093a-4328-8eac-20e8e218f094-kube-api-access-8p68f\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.657759 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-utilities\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.657759 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-catalog-content\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.676505 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p68f\" (UniqueName: \"kubernetes.io/projected/899eab19-093a-4328-8eac-20e8e218f094-kube-api-access-8p68f\") pod \"redhat-marketplace-676r8\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:44 crc kubenswrapper[4520]: I0130 08:39:44.827803 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:45 crc kubenswrapper[4520]: I0130 08:39:45.304128 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-676r8"] Jan 30 08:39:45 crc kubenswrapper[4520]: I0130 08:39:45.804331 4520 generic.go:334] "Generic (PLEG): container finished" podID="899eab19-093a-4328-8eac-20e8e218f094" containerID="25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10" exitCode=0 Jan 30 08:39:45 crc kubenswrapper[4520]: I0130 08:39:45.804407 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-676r8" event={"ID":"899eab19-093a-4328-8eac-20e8e218f094","Type":"ContainerDied","Data":"25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10"} Jan 30 08:39:45 crc kubenswrapper[4520]: I0130 08:39:45.804631 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-676r8" event={"ID":"899eab19-093a-4328-8eac-20e8e218f094","Type":"ContainerStarted","Data":"a945631bfc0a8a12d25daf5ebc71485811dfb7ed35a5077cc39fc81655c0d6ee"} Jan 30 08:39:46 crc kubenswrapper[4520]: I0130 08:39:46.819720 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-676r8" event={"ID":"899eab19-093a-4328-8eac-20e8e218f094","Type":"ContainerStarted","Data":"3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7"} Jan 30 08:39:47 crc kubenswrapper[4520]: I0130 08:39:47.686091 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:39:47 crc kubenswrapper[4520]: E0130 08:39:47.686463 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:39:47 crc kubenswrapper[4520]: I0130 08:39:47.832642 4520 generic.go:334] "Generic (PLEG): container finished" podID="899eab19-093a-4328-8eac-20e8e218f094" containerID="3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7" exitCode=0 Jan 30 08:39:47 crc kubenswrapper[4520]: 
I0130 08:39:47.832708 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-676r8" event={"ID":"899eab19-093a-4328-8eac-20e8e218f094","Type":"ContainerDied","Data":"3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7"} Jan 30 08:39:48 crc kubenswrapper[4520]: I0130 08:39:48.846430 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-676r8" event={"ID":"899eab19-093a-4328-8eac-20e8e218f094","Type":"ContainerStarted","Data":"c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764"} Jan 30 08:39:48 crc kubenswrapper[4520]: I0130 08:39:48.873026 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-676r8" podStartSLOduration=2.283272793 podStartE2EDuration="4.873003615s" podCreationTimestamp="2026-01-30 08:39:44 +0000 UTC" firstStartedPulling="2026-01-30 08:39:45.806992263 +0000 UTC m=+6899.435344445" lastFinishedPulling="2026-01-30 08:39:48.396723086 +0000 UTC m=+6902.025075267" observedRunningTime="2026-01-30 08:39:48.862363079 +0000 UTC m=+6902.490715260" watchObservedRunningTime="2026-01-30 08:39:48.873003615 +0000 UTC m=+6902.501355796" Jan 30 08:39:54 crc kubenswrapper[4520]: I0130 08:39:54.828568 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:54 crc kubenswrapper[4520]: I0130 08:39:54.830388 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:54 crc kubenswrapper[4520]: I0130 08:39:54.872610 4520 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:54 crc kubenswrapper[4520]: I0130 08:39:54.983872 4520 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:55 crc kubenswrapper[4520]: I0130 08:39:55.115894 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-676r8"] Jan 30 08:39:56 crc kubenswrapper[4520]: I0130 08:39:56.961541 4520 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-676r8" podUID="899eab19-093a-4328-8eac-20e8e218f094" containerName="registry-server" containerID="cri-o://c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764" gracePeriod=2 Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.444562 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.599571 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-catalog-content\") pod \"899eab19-093a-4328-8eac-20e8e218f094\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.599785 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-utilities\") pod \"899eab19-093a-4328-8eac-20e8e218f094\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.599871 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p68f\" (UniqueName: \"kubernetes.io/projected/899eab19-093a-4328-8eac-20e8e218f094-kube-api-access-8p68f\") pod \"899eab19-093a-4328-8eac-20e8e218f094\" (UID: \"899eab19-093a-4328-8eac-20e8e218f094\") " Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.601393 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-utilities" (OuterVolumeSpecName: "utilities") pod "899eab19-093a-4328-8eac-20e8e218f094" (UID: "899eab19-093a-4328-8eac-20e8e218f094"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.620603 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "899eab19-093a-4328-8eac-20e8e218f094" (UID: "899eab19-093a-4328-8eac-20e8e218f094"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.626132 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899eab19-093a-4328-8eac-20e8e218f094-kube-api-access-8p68f" (OuterVolumeSpecName: "kube-api-access-8p68f") pod "899eab19-093a-4328-8eac-20e8e218f094" (UID: "899eab19-093a-4328-8eac-20e8e218f094"). InnerVolumeSpecName "kube-api-access-8p68f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.703880 4520 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.703922 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p68f\" (UniqueName: \"kubernetes.io/projected/899eab19-093a-4328-8eac-20e8e218f094-kube-api-access-8p68f\") on node \"crc\" DevicePath \"\"" Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.703934 4520 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899eab19-093a-4328-8eac-20e8e218f094-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.974679 4520 generic.go:334] "Generic (PLEG): container finished" podID="899eab19-093a-4328-8eac-20e8e218f094" containerID="c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764" exitCode=0 Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.974740 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-676r8" event={"ID":"899eab19-093a-4328-8eac-20e8e218f094","Type":"ContainerDied","Data":"c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764"} Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.974779 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-676r8" event={"ID":"899eab19-093a-4328-8eac-20e8e218f094","Type":"ContainerDied","Data":"a945631bfc0a8a12d25daf5ebc71485811dfb7ed35a5077cc39fc81655c0d6ee"} Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.974800 4520 scope.go:117] "RemoveContainer" containerID="c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764" Jan 30 08:39:57 crc kubenswrapper[4520]: I0130 08:39:57.974952 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-676r8" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.028211 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-676r8"] Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.041180 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-676r8"] Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.042284 4520 scope.go:117] "RemoveContainer" containerID="3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.077592 4520 scope.go:117] "RemoveContainer" containerID="25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.110778 4520 scope.go:117] "RemoveContainer" containerID="c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764" Jan 30 08:39:58 crc kubenswrapper[4520]: E0130 08:39:58.111390 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764\": container with ID starting with c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764 not found: ID does not exist" containerID="c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.111505 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764"} err="failed to get container status \"c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764\": rpc error: code = NotFound desc = could not find container \"c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764\": container with ID starting with c82a63f31af98053b7ba9ecdb4fdc8376bcc8f8814d6d8e0f7a74e587b7ce764 not found: ID does not exist" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.111548 4520 scope.go:117] "RemoveContainer" containerID="3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7" Jan 30 08:39:58 crc kubenswrapper[4520]: E0130 08:39:58.111968 4520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7\": container with ID starting with 3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7 not found: ID does not exist" containerID="3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.112021 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7"} err="failed to get container status \"3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7\": rpc error: code = NotFound desc = could not find container \"3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7\": container with ID starting with 3d6e1a9f80a836164dca76511f35ef5b8824c579cbb5a13ef8863f5d8ff5d4d7 not found: ID does not exist" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.112066 4520 scope.go:117] "RemoveContainer" containerID="25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10" Jan 30 08:39:58 crc kubenswrapper[4520]: E0130 08:39:58.114929 4520 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10\": container with ID starting with 25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10 not found: ID does not exist" containerID="25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.114967 4520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10"} err="failed to get container status \"25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10\": rpc error: code = NotFound desc = could not find container \"25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10\": container with ID starting with 25d7ffa469946b05c808cb5237dcf88c9914b7f7894d5f5d89ca4a36c3e49c10 not found: ID does not exist" Jan 30 08:39:58 crc kubenswrapper[4520]: I0130 08:39:58.705990 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899eab19-093a-4328-8eac-20e8e218f094" path="/var/lib/kubelet/pods/899eab19-093a-4328-8eac-20e8e218f094/volumes" Jan 30 08:40:02 crc kubenswrapper[4520]: I0130 08:40:02.686246 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:40:02 crc kubenswrapper[4520]: E0130 08:40:02.687142 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:40:15 crc kubenswrapper[4520]: I0130 08:40:15.685932 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:40:15 crc kubenswrapper[4520]: E0130 08:40:15.686936 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:40:30 crc kubenswrapper[4520]: I0130 08:40:30.685627 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:40:30 crc kubenswrapper[4520]: E0130 08:40:30.686555 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:40:44 crc kubenswrapper[4520]: I0130 08:40:44.685905 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:40:44 crc kubenswrapper[4520]: E0130 08:40:44.686855 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:40:57 crc kubenswrapper[4520]: I0130 08:40:57.686628 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:40:57 crc kubenswrapper[4520]: E0130 08:40:57.687576 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:41:12 crc kubenswrapper[4520]: I0130 08:41:12.685690 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:41:12 crc kubenswrapper[4520]: E0130 08:41:12.686509 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:41:26 crc kubenswrapper[4520]: I0130 08:41:26.694100 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:41:26 crc kubenswrapper[4520]: E0130 08:41:26.695065 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:41:40 crc kubenswrapper[4520]: I0130 08:41:40.686243 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:41:40 crc kubenswrapper[4520]: E0130 08:41:40.687011 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:41:51 crc kubenswrapper[4520]: I0130 08:41:51.686293 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:41:51 crc kubenswrapper[4520]: E0130 08:41:51.687259 4520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dkqtt_openshift-machine-config-operator(e5f51275-c0b1-4467-bf4a-ef848e3521df)\"" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" Jan 30 08:42:02 crc kubenswrapper[4520]: I0130 08:42:02.686747 4520 scope.go:117] "RemoveContainer" containerID="6f73c66e63b3513012bf2229d38f1a3e0abca4bb0f8764b9d9fe057834d99863" Jan 30 08:42:03 crc kubenswrapper[4520]: I0130 08:42:03.153916 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" event={"ID":"e5f51275-c0b1-4467-bf4a-ef848e3521df","Type":"ContainerStarted","Data":"66585b3f350288b36d2dd27c6d1a41ed919049f27d0a980b9b56854e451490d1"} Jan 30 08:43:11 crc kubenswrapper[4520]: I0130 08:43:11.734762 4520 generic.go:334] "Generic (PLEG): container finished" podID="68266a47-8812-40f3-bd46-d1ee8d55def1" containerID="bc080649353fb05e55ea0f671358532e59ff76f49f05f60501c06e43a7a2d68b" exitCode=1 Jan 30 08:43:11 crc kubenswrapper[4520]: I0130 08:43:11.734790 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"68266a47-8812-40f3-bd46-d1ee8d55def1","Type":"ContainerDied","Data":"bc080649353fb05e55ea0f671358532e59ff76f49f05f60501c06e43a7a2d68b"} Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.503284 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.511799 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpl4b\" (UniqueName: \"kubernetes.io/projected/68266a47-8812-40f3-bd46-d1ee8d55def1-kube-api-access-xpl4b\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.522339 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68266a47-8812-40f3-bd46-d1ee8d55def1-kube-api-access-xpl4b" (OuterVolumeSpecName: "kube-api-access-xpl4b") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "kube-api-access-xpl4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.615043 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.615676 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-temporary\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.615878 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ssh-key\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.615905 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-config-data\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.616077 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ca-certs\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.616155 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-workdir\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.616164 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.616177 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config-secret\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.616257 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config\") pod \"68266a47-8812-40f3-bd46-d1ee8d55def1\" (UID: \"68266a47-8812-40f3-bd46-d1ee8d55def1\") " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.617224 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpl4b\" (UniqueName: \"kubernetes.io/projected/68266a47-8812-40f3-bd46-d1ee8d55def1-kube-api-access-xpl4b\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.617244 4520 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.617413 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-config-data" (OuterVolumeSpecName: "config-data") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.620036 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.622158 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.645867 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.646036 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.648323 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.664758 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "68266a47-8812-40f3-bd46-d1ee8d55def1" (UID: "68266a47-8812-40f3-bd46-d1ee8d55def1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.719103 4520 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.719137 4520 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.719148 4520 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.719160 4520 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/68266a47-8812-40f3-bd46-d1ee8d55def1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.719176 4520 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.719187 4520 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/68266a47-8812-40f3-bd46-d1ee8d55def1-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.719224 4520 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.737352 4520 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.753147 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"68266a47-8812-40f3-bd46-d1ee8d55def1","Type":"ContainerDied","Data":"a2ac20f40da2cb4739530cc65beca67c26dd9143de4dbe03a346882e7989d7ff"} Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.753203 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.753205 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ac20f40da2cb4739530cc65beca67c26dd9143de4dbe03a346882e7989d7ff" Jan 30 08:43:13 crc kubenswrapper[4520]: I0130 08:43:13.821543 4520 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.029215 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 08:43:21 crc kubenswrapper[4520]: E0130 08:43:21.034928 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68266a47-8812-40f3-bd46-d1ee8d55def1" containerName="tempest-tests-tempest-tests-runner" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.034976 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="68266a47-8812-40f3-bd46-d1ee8d55def1" containerName="tempest-tests-tempest-tests-runner" Jan 30 08:43:21 crc kubenswrapper[4520]: E0130 08:43:21.035200 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899eab19-093a-4328-8eac-20e8e218f094" containerName="registry-server" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.035218 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="899eab19-093a-4328-8eac-20e8e218f094" containerName="registry-server" Jan 30 08:43:21 crc kubenswrapper[4520]: E0130 08:43:21.035240 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899eab19-093a-4328-8eac-20e8e218f094" containerName="extract-content" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.035246 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="899eab19-093a-4328-8eac-20e8e218f094" containerName="extract-content" Jan 30 08:43:21 crc kubenswrapper[4520]: E0130 08:43:21.035267 4520 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899eab19-093a-4328-8eac-20e8e218f094" containerName="extract-utilities" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.035273 4520 state_mem.go:107] "Deleted CPUSet assignment" podUID="899eab19-093a-4328-8eac-20e8e218f094" containerName="extract-utilities" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.036020 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="68266a47-8812-40f3-bd46-d1ee8d55def1" containerName="tempest-tests-tempest-tests-runner" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.036045 4520 memory_manager.go:354] "RemoveStaleState removing state" podUID="899eab19-093a-4328-8eac-20e8e218f094" containerName="registry-server" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.042359 4520 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.051144 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.053235 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-r5wqx" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.187587 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ca95e6c-b73a-478a-bff0-111d4924f066\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.187722 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qz6\" (UniqueName: \"kubernetes.io/projected/3ca95e6c-b73a-478a-bff0-111d4924f066-kube-api-access-h8qz6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ca95e6c-b73a-478a-bff0-111d4924f066\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.289591 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ca95e6c-b73a-478a-bff0-111d4924f066\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.289682 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qz6\" (UniqueName: \"kubernetes.io/projected/3ca95e6c-b73a-478a-bff0-111d4924f066-kube-api-access-h8qz6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ca95e6c-b73a-478a-bff0-111d4924f066\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.292396 4520 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ca95e6c-b73a-478a-bff0-111d4924f066\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.313855 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qz6\" (UniqueName: \"kubernetes.io/projected/3ca95e6c-b73a-478a-bff0-111d4924f066-kube-api-access-h8qz6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ca95e6c-b73a-478a-bff0-111d4924f066\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:21 crc kubenswrapper[4520]: I0130 08:43:21.318900 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ca95e6c-b73a-478a-bff0-111d4924f066\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:21 crc 
kubenswrapper[4520]: I0130 08:43:21.372770 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 08:43:22 crc kubenswrapper[4520]: I0130 08:43:21.843477 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 08:43:22 crc kubenswrapper[4520]: I0130 08:43:21.858994 4520 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:43:22 crc kubenswrapper[4520]: I0130 08:43:22.829425 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3ca95e6c-b73a-478a-bff0-111d4924f066","Type":"ContainerStarted","Data":"b81f2c1bbf11e9dd7da8e4f0c54a359842092fc880f562b6a0d29a7119a7a21d"} Jan 30 08:43:23 crc kubenswrapper[4520]: I0130 08:43:23.849194 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3ca95e6c-b73a-478a-bff0-111d4924f066","Type":"ContainerStarted","Data":"7e62136e0c955ec7ff8e27dbc224c0624e0faa48f7aa1519e0ee039e2b962855"} Jan 30 08:43:23 crc kubenswrapper[4520]: I0130 08:43:23.866215 4520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.791666347 podStartE2EDuration="3.86617998s" podCreationTimestamp="2026-01-30 08:43:20 +0000 UTC" firstStartedPulling="2026-01-30 08:43:21.854865521 +0000 UTC m=+7115.483217692" lastFinishedPulling="2026-01-30 08:43:22.929379143 +0000 UTC m=+7116.557731325" observedRunningTime="2026-01-30 08:43:23.864484622 +0000 UTC m=+7117.492836802" watchObservedRunningTime="2026-01-30 08:43:23.86617998 +0000 UTC m=+7117.494532161" Jan 30 08:44:27 crc kubenswrapper[4520]: I0130 08:44:27.793599 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:44:27 crc kubenswrapper[4520]: I0130 08:44:27.794385 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:44:57 crc kubenswrapper[4520]: I0130 08:44:57.793972 4520 patch_prober.go:28] interesting pod/machine-config-daemon-dkqtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:44:57 crc kubenswrapper[4520]: I0130 08:44:57.794473 4520 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dkqtt" podUID="e5f51275-c0b1-4467-bf4a-ef848e3521df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.337950 4520 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn"] Jan 
30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.340722 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.344118 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2caa1d5-c347-49fe-a1da-83254d64b512-secret-volume\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.344737 4520 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.344740 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2caa1d5-c347-49fe-a1da-83254d64b512-config-volume\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.344824 4520 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbhm\" (UniqueName: \"kubernetes.io/projected/b2caa1d5-c347-49fe-a1da-83254d64b512-kube-api-access-kxbhm\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.345670 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn"] Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.346346 4520 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.448661 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2caa1d5-c347-49fe-a1da-83254d64b512-config-volume\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.448817 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxbhm\" (UniqueName: \"kubernetes.io/projected/b2caa1d5-c347-49fe-a1da-83254d64b512-kube-api-access-kxbhm\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.448835 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2caa1d5-c347-49fe-a1da-83254d64b512-config-volume\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.450005 4520 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2caa1d5-c347-49fe-a1da-83254d64b512-secret-volume\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.461356 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2caa1d5-c347-49fe-a1da-83254d64b512-secret-volume\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.464163 4520 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxbhm\" (UniqueName: \"kubernetes.io/projected/b2caa1d5-c347-49fe-a1da-83254d64b512-kube-api-access-kxbhm\") pod \"collect-profiles-29496045-zcjnn\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:00 crc kubenswrapper[4520]: I0130 08:45:00.670391 4520 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:01 crc kubenswrapper[4520]: I0130 08:45:01.167317 4520 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn"] Jan 30 08:45:01 crc kubenswrapper[4520]: I0130 08:45:01.915906 4520 generic.go:334] "Generic (PLEG): container finished" podID="b2caa1d5-c347-49fe-a1da-83254d64b512" containerID="b4cb1058875e9c5a1554328b6684310fcfd6a8df45b83f50cdc54cee45c20dde" exitCode=0 Jan 30 08:45:01 crc kubenswrapper[4520]: I0130 08:45:01.916008 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" event={"ID":"b2caa1d5-c347-49fe-a1da-83254d64b512","Type":"ContainerDied","Data":"b4cb1058875e9c5a1554328b6684310fcfd6a8df45b83f50cdc54cee45c20dde"} Jan 30 08:45:01 crc kubenswrapper[4520]: I0130 08:45:01.916378 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" event={"ID":"b2caa1d5-c347-49fe-a1da-83254d64b512","Type":"ContainerStarted","Data":"7ade62429d89ea02ef34ebd622f4f08ecd370ceb7255e4121a22a041dcf0c323"} Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.246999 4520 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.316054 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxbhm\" (UniqueName: \"kubernetes.io/projected/b2caa1d5-c347-49fe-a1da-83254d64b512-kube-api-access-kxbhm\") pod \"b2caa1d5-c347-49fe-a1da-83254d64b512\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.316396 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2caa1d5-c347-49fe-a1da-83254d64b512-secret-volume\") pod \"b2caa1d5-c347-49fe-a1da-83254d64b512\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.317102 4520 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2caa1d5-c347-49fe-a1da-83254d64b512-config-volume\") pod \"b2caa1d5-c347-49fe-a1da-83254d64b512\" (UID: \"b2caa1d5-c347-49fe-a1da-83254d64b512\") " Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.317850 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2caa1d5-c347-49fe-a1da-83254d64b512-config-volume" (OuterVolumeSpecName: "config-volume") pod "b2caa1d5-c347-49fe-a1da-83254d64b512" (UID: "b2caa1d5-c347-49fe-a1da-83254d64b512"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.318161 4520 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2caa1d5-c347-49fe-a1da-83254d64b512-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.324390 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2caa1d5-c347-49fe-a1da-83254d64b512-kube-api-access-kxbhm" (OuterVolumeSpecName: "kube-api-access-kxbhm") pod "b2caa1d5-c347-49fe-a1da-83254d64b512" (UID: "b2caa1d5-c347-49fe-a1da-83254d64b512"). InnerVolumeSpecName "kube-api-access-kxbhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.324562 4520 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2caa1d5-c347-49fe-a1da-83254d64b512-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b2caa1d5-c347-49fe-a1da-83254d64b512" (UID: "b2caa1d5-c347-49fe-a1da-83254d64b512"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.420788 4520 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxbhm\" (UniqueName: \"kubernetes.io/projected/b2caa1d5-c347-49fe-a1da-83254d64b512-kube-api-access-kxbhm\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.420832 4520 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b2caa1d5-c347-49fe-a1da-83254d64b512-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.938133 4520 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" event={"ID":"b2caa1d5-c347-49fe-a1da-83254d64b512","Type":"ContainerDied","Data":"7ade62429d89ea02ef34ebd622f4f08ecd370ceb7255e4121a22a041dcf0c323"} Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.938222 4520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ade62429d89ea02ef34ebd622f4f08ecd370ceb7255e4121a22a041dcf0c323" Jan 30 08:45:03 crc kubenswrapper[4520]: I0130 08:45:03.938162 4520 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-zcjnn" Jan 30 08:45:04 crc kubenswrapper[4520]: I0130 08:45:04.331172 4520 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n"] Jan 30 08:45:04 crc kubenswrapper[4520]: I0130 08:45:04.341084 4520 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496000-dxf7n"] Jan 30 08:45:04 crc kubenswrapper[4520]: I0130 08:45:04.697762 4520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c79b19-041d-49f3-b54f-7e91d60f0439" path="/var/lib/kubelet/pods/34c79b19-041d-49f3-b54f-7e91d60f0439/volumes"